f(AI)nder View
April 2019
f(AI)nder View - a monthly digest in the field of
Artificial Intelligence, Machine Learning, Deep Learning
exploring the various levels and facets of AI: from the basic to the sophisticated,
from algorithms and technologies to business applications
Nodis - partner company of the April issue
TruTint - Transparent, color, fast switching smart glass
Nodis' TruTint smart glass technology is transforming windows, giving people the ability to change a window's tint, color and temperature characteristics instantly. TruTint is the only color smart glass solution.
Buildings consume 40% of a city's electricity and generate 45% of its CO2 emissions. TruTint smart windows can reduce electricity usage by up to 40% and enable carbon-neutral buildings.
Nodis is headquartered in Singapore and funded by the Singapore government through the National Research Foundation.
Nodis is a winner of the Shell IdeaRefinery and the Grand Prize winner of the K-Startup Grand Challenge.
AI ecosystem
AI industry analytics, trends, foresights, insights, challenges, regulations, etc.
Five pain points that can give rise to AI risks: Data difficulties, Technology troubles, Security snags, Models misbehaving, Interaction issues.
Core principles of AI risk management:
1. Clarity - use a structured identification approach to pinpoint the most critical risks.
2. Breadth - institute robust enterprise-wide controls.
3. Nuance - reinforce specific controls depending on the nature of the risk.

Read more
The first comprehensive and in-depth study of the 2018 Eastern European AI ecosystem:
history, present state and future of the AI industry in 11 Eastern European countries, profiling 500 companies, 230 investors, 60 influencers, 30 Hubs & Accelerators and 15 conferences. The main technologies covered in the report: Robotics, Computer Vision, ML,
Intelligent Data Analysis, Recommender Systems, Search Engines and Language Processing, IoT.
Read more
Robotics, automation, and AI continue to gear up for the future. Industrial robotics will regain its spot as a prime growth driver.
In 2019, sensors will continue to proliferate as costs decline and performance capabilities improve, fueling ever greater machine intelligence and enabling autonomous systems. 2019 will also be the year edge computing enables ML to reach beyond data centers and the cloud to play a key role in getting intelligence into robots out in the field.

Read more
Four core technology trends, tightly coupled with (and sometimes enabled by) AI, will reshape the insurance industry over the next decade:
+ Explosion of data from connected devices
+ Increased prevalence of physical robotics
+ Open source and data ecosystems
+ Advances in cognitive technologies.

Read more
What's next for artificial intelligence?
See 25 of the biggest AI trends to watch in 2019: open source frameworks, edge AI, predictive maintenance, crop monitoring, medical imaging and diagnostics, advanced healthcare biometrics, next-gen prosthetics, federated learning, synthetic training data, network optimization, generative adversarial networks (GANs) and others.

Read more
According to a 2017 report, AI could potentially double Singapore's annual economic growth rate from 3.2% to 5.4% by 2035. The Accelerated Initiative for Artificial Intelligence (AI2) will strongly complement Singapore's shift to a digital economy and support innovative enterprises that need to bring their AI products to global markets faster.
Read more
Applications
Scientists at Facebook AI Research and
Tel Aviv University describe a system that directly converts audio of one singer to the voice of another. It's unsupervised, meaning it's able to perform the conversion from unclassified, unannotated data it hasn't previously encountered. The model was able to learn to convert between singers from just 5-30 minutes of their singing voices.

Read more
The model's predictions were consistent with user judgments at the ~90% level, demonstrating that an ML model can effectively estimate the perceived tappability of interface elements in a design without the need for expensive and time-consuming user testing. Predicting tappability is merely one example of how ML can be used to solve usability issues in user interfaces.
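As a rough illustration of the framing only (not the paper's actual model or feature set; the feature names below are hypothetical), tappability prediction reduces to binary classification over per-element features:

```python
# Sketch: tappability as binary classification over UI-element features.
# Features, labels and model choice are illustrative assumptions only.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [width_px, height_px, contrast, has_icon, looks_raised]
X = [[120, 40, 0.8, 1, 1],
     [300, 20, 0.2, 0, 0],
     [ 80, 80, 0.9, 1, 1],
     [250, 16, 0.1, 0, 0]]
y = [1, 0, 1, 0]  # 1 = users perceived the element as tappable

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict_proba([[100, 44, 0.7, 1, 1]])[0, 1])  # P(tappable)
```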
Read more
Automatically generating animation from natural-language text finds application in a number of areas, e.g. movie script writing, instructional videos, and public safety. Existing text-to-animation systems can handle only very simple sentences, which limits their applications. The paper describes a text-to-animation system capable of handling complex sentences.
Read more
The goal here, which doesn't seem far-fetched at all, is to be able to tell a service "give me a 50-page summary of the last 4 years of bioengineering." A few minutes later, boom, there it is. The flexibility of text means you could also request it in Spanish or Korean. Parameterization means you could easily tweak the output, emphasizing regions and authors or excluding keywords or irrelevant topics.
Read more
Scientists are seeking to replicate fusion on Earth to provide an abundant supply of power for producing electricity. AI is now beginning to contribute to the worldwide quest for fusion power. The deep learning code, called the Fusion Recurrent Neural Network (FRNN), opens possible pathways for controlling as well as predicting disruptions. A key feature of deep learning is its ability to capture high-dimensional rather than one-dimensional data.
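The general pattern can be sketched in a few lines of PyTorch (our illustration; FRNN's actual architecture and inputs differ): a recurrent network reads multichannel plasma diagnostics step by step and emits a disruption-risk score at each step.

```python
# Toy sketch of a recurrent disruption predictor. The architecture and
# dimensions are illustrative assumptions, not FRNN's actual design.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_signals=14, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-step disruption risk

    def forward(self, x):                  # x: (batch, time, n_signals)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)

model = DisruptionPredictor()
risk = model(torch.randn(8, 200, 14))      # (8, 200) risk trajectories
```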
Read more
In recent years, ML algorithms have become increasingly popular among astronomers and are now used for a wide variety of tasks. The overview summarizes supervised and unsupervised learning algorithms and provides practical information on applying such tools to astronomical datasets. The main focus is on unsupervised ML algorithms, which are used to perform cluster analysis, dimensionality reduction, visualization, and outlier detection.
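Each of those unsupervised tasks maps onto standard off-the-shelf tools. A generic scikit-learn sketch on synthetic data (ours, not code from the overview):

```python
# Cluster analysis, dimensionality reduction, and outlier detection on
# a synthetic "catalog" of objects; purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
catalog = rng.normal(size=(1000, 10))    # 1000 objects, 10 features

labels   = KMeans(n_clusters=3, n_init=10).fit_predict(catalog)
embedded = PCA(n_components=2).fit_transform(catalog)  # for visualization
outliers = IsolationForest(random_state=0).fit_predict(catalog) == -1
```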
Read more
Introductions
Andrej Karpathy: "I have developed a specific process for myself that I follow when applying a neural net to a new problem, which I will try to describe.
In particular, it builds from simple to complex and at every step of the way we make concrete hypotheses about what will happen and then either validate them with an experiment or investigate until we find some issue".
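One concrete hypothesis-then-experiment step in that spirit (our sketch, not code from the post): verify that the model can overfit a single small batch before scaling up; if the loss will not approach zero, something upstream is broken.

```python
# Sanity check: a model that cannot overfit one fixed batch has a bug
# somewhere (data pipeline, loss, or architecture). Illustrative sketch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))  # one fixed batch

for step in range(500):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")  # should be close to zero
```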

Read more
Gaussian processes are a powerful tool in the machine learning toolbox. They allow us to make predictions about our data by incorporating prior knowledge. For a given set of training points, there are potentially infinitely many functions that fit the data. Gaussian processes offer an elegant solution to this problem by assigning a probability to each of these functions.
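A minimal numpy sketch of that idea for regression (the standard GP posterior formulas with an RBF kernel; ours, not from the article):

```python
# GP regression: posterior mean and covariance at test points.
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between two sets of 1-D points.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X  = np.array([-2.0, 0.0, 1.5]); y = np.sin(X)   # training points
Xs = np.linspace(-3, 3, 100)                     # test points
K   = rbf(X, X) + 1e-6 * np.eye(len(X))          # jitter for stability
Ks  = rbf(Xs, X)
mu  = Ks @ np.linalg.solve(K, y)                 # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance
```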
Read more
OpenAI Five is the first AI to beat the world champions in an esports game. OpenAI Five's ability to play with humans presents a compelling vision for the future of human-AI interaction, one where AI systems collaborate and enhance the human experience. OpenAI Five exhibits zero-shot transfer learning—it was trained to have all heroes controlled by copies of itself, but generalizes to controlling a subset of heroes, playing with or against humans.
Read more
The advantages a quantum computer can offer depend on the quantum algorithms it can run and also how easily we can get data in and out of the system. Today's hardware is still well short of the universal quantum dream and, in truth, the most often discussed benefits are still many years away. However the promise remains revolutionary and commercial activity is already gathering pace across the many types of software that will be required to run these machines.
Read more
A single cortical pyramidal neuron is a highly complicated I/O device. It typically receives a barrage of thousands of synaptic inputs over its highly branched dendritic tree.
In response, an output in the form of a train of spikes is generated in the axon. The information contained in these spikes is then communicated, via synapses, to thousands of other (postsynaptic) neurons. Understanding the relationship between the neuron's morphology, physiology and synaptic input, and its spiking output is a central question in neuroscience.
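As a toy caricature of that input-to-spike mapping (vastly simpler than a real pyramidal neuron), a leaky integrate-and-fire model integrates input current and emits a spike whenever the membrane potential crosses a threshold:

```python
# Leaky integrate-and-fire: a drastically simplified neuron model,
# shown only to illustrate the input -> membrane potential -> spike chain.
import numpy as np

dt, tau, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0
rng = np.random.default_rng(0)
I = rng.normal(1.2, 0.5, size=2000)   # noisy input drive per time step

v, spikes = 0.0, []
for t, i_t in enumerate(I):
    v += dt / tau * (-v + i_t)        # leaky integration
    if v >= v_th:                     # threshold crossing -> spike
        spikes.append(t)
        v = v_reset
print(f"{len(spikes)} spikes in {len(I) * dt:.1f} s")
```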

Read more
Although recent artificial intelligence succeeds in many data-intensive applications, it still lacks the ability to learn from limited exemplars and generalize quickly to new tasks. A class of machine learning problems called Few-Shot Learning targets exactly this case: it generalizes rapidly to new tasks with limited supervised experience by turning to prior knowledge, mimicking humans' ability to acquire knowledge from few examples through generalization and analogy.
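One popular instantiation of this idea is a prototypical-networks-style classifier (our sketch, not taken from the survey): embed the few labeled examples, average them into per-class prototypes, and assign a query to the nearest prototype.

```python
# Few-shot classification by nearest class prototype. Random tensors
# stand in for a pretrained encoder's embeddings; illustrative only.
import torch

def classify(support, support_labels, query, n_classes):
    # prototype = mean embedding of each class's few support examples
    protos = torch.stack([support[support_labels == c].mean(0)
                          for c in range(n_classes)])
    dists = torch.cdist(query, protos)   # (n_query, n_classes)
    return dists.argmin(dim=1)           # nearest prototype wins

support = torch.randn(10, 64)            # 5 classes x 2 shots
labels  = torch.arange(5).repeat_interleave(2)
preds   = classify(support, labels, torch.randn(3, 64), n_classes=5)
```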
Read more
Toolbox
"The article suggests open research problems that we'd be excited for other researchers to work on".
+ What are the trade-offs between GANs and other generative models?
+ What sorts of distributions can GANs model?
+ How can we scale GANs beyond image synthesis?
+ What can we say about the global convergence of GAN training?
+ How should we evaluate GANs, and when should we use them?
+ How does GAN training scale with batch size?
+ What is the relationship between GANs and adversarial examples?

Read more
One existing challenge in AI research is modeling long-range, subtle interdependencies in complex data like images, videos, or sounds. The Sparse Transformer is a deep neural network that sets new records at predicting what comes next in a sequence, whether text, images, or sound. It uses an algorithmic improvement of the attention mechanism to extract patterns from sequences 30x longer than previously possible. Sparse Transformers set new state-of-the-art scores for density estimation of CIFAR-10, Enwik8, and ImageNet-64.
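The core change is replacing full attention, where every position attends to every earlier position at O(n^2) cost, with fixed sparse patterns. A rough sketch of one such strided mask (in the spirit of the paper, not its exact factorized patterns):

```python
# Strided sparse attention mask: each position attends to a local
# window plus every stride-th earlier "summary" position. Rough
# illustration of the idea, not the paper's exact patterns.
import torch

def strided_mask(n, window=4, stride=4):
    i = torch.arange(n)[:, None]
    j = torch.arange(n)[None, :]
    causal  = j <= i                   # no attending to the future
    local   = (i - j) < window         # recent positions
    strided = (j % stride) == 0        # periodic summary positions
    return causal & (local | strided)

mask = strided_mask(16)
print(mask.sum().item(), "of", 16 * 16, "entries attended")
```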
Read more
Given the large search space of possible architectures, designing a network from scratch for your specific application can be prohibitively expensive in terms of computational resources and time. Approaches such as Neural Architecture Search and AdaNet use machine learning to search the design space in order to find improved architectures. An alternative is to take an existing architecture for a similar problem and, in one shot, optimize it for the task at hand. MorphNet, a sophisticated technique for neural network model refinement, takes the latter approach.
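In spirit, MorphNet's shrinking phase can be sketched as a sparsity-inducing penalty on per-channel batch-norm scales: channels driven toward zero become pruning candidates, after which the surviving widths can be uniformly re-expanded under the resource budget. A simplified sketch, not MorphNet's implementation:

```python
# Simplified "shrink" step in the spirit of MorphNet (not its actual
# code): an L1 penalty on BatchNorm scales pushes some channels toward
# zero; channels below a threshold become pruning candidates.
import torch
import torch.nn as nn

conv, bn = nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32)
x = torch.randn(4, 3, 16, 16)
out = bn(conv(x))

task_loss = out.pow(2).mean()              # stand-in for the real loss
reg = bn.weight.abs().sum()                # L1 on per-channel scales
(task_loss + 1e-3 * reg).backward()        # joint objective

keep = bn.weight.detach().abs() > 1e-2     # surviving channels
print(int(keep.sum()), "of 32 channels kept")
```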
Read more
Working effectively with large graphs is crucial to advancing both the research and applications of artificial intelligence. PyTorch-BigGraph makes it much faster and easier to produce graph embeddings for extremely large graphs — in particular, multi-relation graph embeddings for graphs where the model is too large to fit in memory.
With this new tool, anyone can take a large graph and quickly produce high-quality embeddings using a single machine or multiple machines in parallel.
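The model family underneath can be illustrated in plain PyTorch (a TransE-style sketch of multi-relation embeddings; this is not the PyTorch-BigGraph API): each entity gets a vector, each relation an operator, and training ranks observed (source, relation, destination) edges above corrupted ones.

```python
# Minimal multi-relation embedding model with a translation operator,
# to illustrate what PyTorch-BigGraph trains at scale. Illustrative
# sketch of the idea, not the PBG API.
import torch
import torch.nn as nn

n_entities, n_relations, dim = 1000, 10, 64
ent = nn.Embedding(n_entities, dim)
rel = nn.Embedding(n_relations, dim)      # one translation per relation

def score(src, r, dst):                   # higher = more plausible edge
    return -(ent(src) + rel(r) - ent(dst)).norm(dim=-1)

src, r, dst = torch.tensor([3]), torch.tensor([1]), torch.tensor([7])
neg = torch.randint(0, n_entities, (1,))  # corrupted destination
loss = torch.relu(1.0 + score(src, r, neg) - score(src, r, dst)).mean()
loss.backward()                           # margin ranking objective
```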

Read more
MLIR, or Multi-Level Intermediate Representation, is a representation format and library of compiler utilities that sits between the model representation and low-level compilers/executors that generate hardware-specific code. MLIR is, at its heart, a flexible infrastructure for modern optimizing compilers. This means it consists of a specification for intermediate representations (IR) and a code toolkit to perform transformations on that representation.
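As a loose analogy only (MLIR defines its own IR syntax and a C++ toolkit; this Python toy is ours alone): an IR is a structured representation of a program, and a transformation pass is a function that rewrites it.

```python
# Toy analogy for "IR + transformation pass" (not MLIR syntax or API):
# a tiny expression IR plus a constant-folding pass that rewrites it.
from dataclasses import dataclass

@dataclass
class Const:
    value: float

@dataclass
class Add:
    lhs: object
    rhs: object

def fold(node):
    # Constant-folding pass: Add(Const, Const) -> Const
    if isinstance(node, Add):
        lhs, rhs = fold(node.lhs), fold(node.rhs)
        if isinstance(lhs, Const) and isinstance(rhs, Const):
            return Const(lhs.value + rhs.value)
        return Add(lhs, rhs)
    return node

print(fold(Add(Const(2.0), Add(Const(3.0), Const(4.0)))))  # Const(9.0)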
Read more
Most generative models rely on 2D kernels to generate images and make few assumptions about the 3D world.
These models therefore tend to create blurry images or artefacts in tasks that require a strong 3D understanding, such as novel-view synthesis. HoloGAN instead learns a 3D representation of the world and learns to render this representation in a realistic manner. Unlike other GANs, HoloGAN provides explicit control over the pose of generated objects through rigid-body transformations of the learnt 3D features.
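That explicit pose control can be sketched with standard PyTorch ops (illustrative only, not HoloGAN's implementation): rotate a learned 3D feature volume with a rigid-body transform before rendering it to 2D.

```python
# Rotating a 3D feature volume with a rigid-body transform, the kind
# of pose control HoloGAN applies to its learned 3D features.
import math
import torch
import torch.nn.functional as F

feats = torch.randn(1, 8, 16, 16, 16)        # (N, C, D, H, W) features
a = math.radians(30)                         # rotate 30 deg about z
theta = torch.tensor([[[math.cos(a), -math.sin(a), 0, 0],
                       [math.sin(a),  math.cos(a), 0, 0],
                       [0,            0,           1, 0]]])
grid = F.affine_grid(theta, feats.shape, align_corners=False)
rotated = F.grid_sample(feats, grid, align_corners=False)
```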

Read more
AI hardware
The global deep learning chipset market will surge from US$1.6 billion in 2017 to US$66.3 billion by 2025. The edge computing market is expected to represent more than three-quarters of the total market opportunity, as opposed to cloud and data centre environments. Chinese startups lack the experience of large semiconductor companies with CPU, GPU and FPGA designs, so ASICs are seen as a catch-up opportunity.
Read more
DynapCNN is a 12 mm² chip, fabricated in 22 nm technology, housing over 1 million spiking neurons and 4 million programmable parameters, with a scalable architecture optimally suited for implementing convolutional neural networks. It is a first-of-its-kind ASIC that brings together the power of machine learning and the efficiency of event-driven neuromorphic computation in one device.
Read more
The aim of this review is to provide quantum engineers with an introductory guide to the central concepts and challenges in the rapidly accelerating field of superconducting quantum circuits. Several foundational elements developed during this period are reviewed -- qubit design, noise properties, qubit control, and readout techniques -- bridging fundamental concepts in circuit quantum electrodynamics (cQED) and contemporary, state-of-the-art applications in gate-model quantum computation.
Read more
So what exactly is Qualcomm doing?
In a nutshell, the company is developing a family of AI inference accelerators for the datacenter market. Though not quite a top-to-bottom initiative, these accelerators will come in a variety of form factors and TDPs to fit datacenter operators' needs. Within this market, Qualcomm expects to win by offering the most efficient inference accelerators available, with performance well above current GPU and FPGA frontrunners.
Read more
Accelerator chips that use light rather than electrons to carry out computations promise to supercharge AI model training and inference. In theory, they could process algorithms at the speed of light. Boston-based Lightelligence, though, claims it's achieved a measure of success with its optical AI chip, which today debuts in prototype form. It says that latency is improved up to 10,000 times compared with traditional hardware, and it estimates power consumption at "orders of magnitude" lower.
Read more
Xanadu's hardware team is developing the core technology that will underpin its future quantum processors (QPUs). Xanadu's approach to quantum computing is powered by the quantum properties of light in an integrated photonic environment.
But what is special about this light, and how is it generated? In short, quantum light can exhibit entanglement, a fundamental prerequisite for quantum computing.
Below we delve into one of Xanadu's core technologies: squeezed states of light.
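For reference, the textbook definitions (standard quantum optics, not quoted from the post): the squeeze operator reduces the uncertainty of one quadrature at the expense of the other,

```latex
% Squeeze operator for a mode with annihilation operator a:
S(r) = \exp\!\left[ \tfrac{r}{2} \left( a^{2} - a^{\dagger 2} \right) \right]
% Quadrature variances of the squeezed vacuum S(r)|0\rangle:
\langle \Delta X^{2} \rangle = \tfrac{1}{2} e^{-2r}, \qquad
\langle \Delta P^{2} \rangle = \tfrac{1}{2} e^{+2r}
```

so one quadrature drops below the vacuum level of 1/2 while the product of the two still saturates the Heisenberg bound.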

Read more
AI competitions
Temporal relational data is very common in industrial ML applications, such as online advertising, recommender systems, financial market analysis, medical treatment, fraud detection, etc.
Provided and Sponsored by 4Paradigm, ChaLearn, Microsoft and Amazon.
$33,500 Prize Money.

Read more
The main focus is on machine learning models that can identify toxicity in online conversations, where toxicity is defined as anything rude, disrespectful or otherwise likely to make someone leave a discussion.
$65,000 Prize Money.
Entry deadline: June 19, 2019.
Read more
The challenge puts forward five distinct flight physics problems with varying degrees of complexity, ranging from a simple mathematical question to a global flight physics problem. The exact nature of the prize for each of the five problem statements may vary.
The submission period ends October 2019.
Read more
Events
23-24 May, 2019. Boston, USA.
Neural Networks, Image Retrieval, Speech Recognition, Robotics, Technology Infrastructure, Industrial Automation, Reinforcement Learning, Computer Vision.
Read more
28-31 May, 2019. Geneva, Switzerland.
The leading United Nations platform for global and inclusive dialogue on AI. Connecting AI innovators with problem owners for sustainable development.
Read more
4-5 June, 2019. Beijing, China.
The first and only conference dedicated solely to the ecosystem developing hardware accelerators for neural networks and computer vision.
Read more
f(AI)nder View - a monthly digest in the field of
Artificial Intelligence, Machine Learning, Deep Learning
exploring the various levels and facets of AI:
from the basic to the sophisticated,
from algorithms and technologies to business applications

f(AI)nder View is prepared by f(AI)nder Lab