f(AI)nder View
September 2018
f(AI)nder View - a monthly digest in the field of
Artificial Intelligence, Machine Learning, Deep Learning
exploring the various levels and faces of AI: from basic to sophisticated ones,
from algorithms and technologies to business applications

AI ecosystem
AI industry analytics, trends, foresights, insights, challenges, regulations, etc.
Major findings of the new research from the McKinsey Global Institute:
1. There is large potential for AI to contribute to global economic activity.
2. A key challenge is that adoption of AI could widen gaps among countries, companies, and workers.

Read more
Artificial Intelligence Global Executive Study and Research Report from MIT Sloan Management Review and BCG.
Myth: Companies that see success with AI flourish via small-scale experiments.
Reality: AI leaders are increasing their investments in AI and creating strategies for taking AI to industrial scale.

Read more
Industry AI deep dive from CB Insights.
Healthcare is emerging as a prominent area for AI research and applications.
In the private market, healthcare AI startups have raised $4.3B across 576 deals since 2013, topping all other industries in AI deal activity.
Read more
There are over 950 active startups utilizing or developing AI technologies, of which 445 startups have raised one or more funding rounds. Israel's AI startup ecosystem has raised over $7.5 billion. Cumulative Israeli AI startup exits total nearly $4.4 billion over 66 exits.
Read more
"Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead."
Read more
Up to 2022: 75 million jobs may be displaced by a shift in the division of labour between humans and machines,
while 133 million new roles may emerge (data analysts and scientists, Artificial Intelligence and Machine Learning specialists, etc.).

Read more
Applications
AI model is capable of predicting the location of aftershocks up to one year after a major earthquake. The model was trained on 199 major earthquake events, followed by 130,000 aftershocks, and was found to be more accurate than a method used to predict aftershocks today.
Read more
New system can not only teach itself to see and identify objects, but also understand how best to manipulate them. Armed with the new machine learning routine referred to as "dense object nets (DON)," the robot would be capable of picking up an object that it's never seen before, or in an unfamiliar orientation, without resorting to trial and error - exactly as a human would.
Read more
Design system allows users to interactively design and optimize a free-form 3D shape while the pressure on the surface and the velocity field are accurately predicted in real-time. Before, the computation of the aerodynamic properties of cars usually took a day. Machine learning can make extremely time-consuming methods a lot faster.
Read more
MIT researchers detail a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression. Given a new subject, it can accurately predict if the individual is depressed, without needing any other information about the questions and answers.
Read more
A key advantage of the proposed neural network models is that they rely on only an extremely small number (two) of site-based descriptors. These models have the potential to rapidly traverse vast chemical spaces to accurately identify stable compositions, accelerating the discovery of novel materials with potentially superior properties.
Read more
Advancements in genomic research have driven modern genomic studies into "big data" disciplines. Genomics poses unique challenges for deep learning.
The paper provides a concise review of deep learning applications in various aspects of genomic research, as well as pointing out potential opportunities and obstacles for future genomics applications.

Read more
Introductions
The paper introduces AlphaGo Zero, the latest evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is arguably the strongest Go player in history. AlphaGo Zero learns to play simply by playing games against itself, starting from completely random play.
Read more
Generative adversarial networks are neural networks that learn to generate samples from a target distribution (the "generative" part of the name), and they do this by setting up a competition (hence "adversarial").
The generator and discriminator within a GAN play a little contest, competing against each other, with the generator iteratively updating its fake samples to become more similar to the real ones. GAN Lab visualizes the interactions between them.

Read more
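The adversarial loop described above can be sketched in a few lines. This is a minimal toy example under assumed settings (a scalar generator G(z) = z + θ learning to match real samples from N(3, 1), against a logistic-regression discriminator), not the model GAN Lab itself uses:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

theta = 0.0          # generator parameter: G(z) = z + theta
w, b = 0.0, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.05, 0.05

for step in range(3000):
    real = rng.normal(3.0, 1.0, 64)          # samples from the target distribution
    fake = rng.normal(0.0, 1.0, 64) + theta  # generator output G(z)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # nudging the fakes toward where the discriminator thinks real data lives
    fake = rng.normal(0.0, 1.0, 64) + theta
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

print(f"theta = {theta:.2f}")  # drifts toward the real data mean of 3.0
```

The alternating updates are the whole trick: each side's loss depends on the other's current parameters, which is exactly the interaction GAN Lab animates.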
Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in humans and animals. Can we enable a robot to do the same, learning to manipulate a new object by simply watching a human manipulating the object? Such a capability would make it dramatically easier for us to communicate new goals to robots – we could simply show robots what we want them to do, rather than teleoperating the robot or engineering a reward function.
Read more
A key shortcoming of Convolutional Neural Networks (CNNs) is that they do not carry any information about the relative relationships between features.
Capsule Networks introduce a new building block that can be used in deep learning to better model hierarchical pose relationships between image features inside the network.
Read more
Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks.
These algorithms, however, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by.

Read more
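As a reminder of what these optimizers are doing, here is a minimal sketch on an assumed toy objective f(x) = (x - 2)², comparing plain gradient descent with momentum, two of the variants such overviews cover:

```python
def grad(x):
    """Gradient of the toy objective f(x) = (x - 2)**2."""
    return 2.0 * (x - 2.0)

# Plain gradient descent: repeatedly step against the gradient.
x = 10.0
for _ in range(100):
    x -= 0.1 * grad(x)

# Momentum: accumulate a velocity so consistent gradients speed up
# progress while oscillating ones partly cancel out.
y, v = 10.0, 0.0
for _ in range(100):
    v = 0.9 * v - 0.1 * grad(y)
    y += v

print(x, y)  # both approach the minimum at x = 2
```

On a real loss surface the gradient comes from backpropagation rather than a closed form, but the update rules are exactly these.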
The report presents a brief survey on the development of DL approaches, including Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL).
Read more
Toolbox
A novel video-to-video synthesis approach under the generative adversarial learning framework. Through carefully designed generator and discriminator architectures, coupled with a spatial-temporal adversarial objective, the approach achieves high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats, including segmentation masks, sketches, and poses.
Read more
Multi-task reinforcement learning is one of the most exciting areas in the deep learning space. Despite impressive results, most widely adopted reinforcement learning (RL) algorithms focus on learning a single task and face many challenges when used in multi-task environments. Researchers from DeepMind have proposed a new method called PopArt to improve RL in multi-task environments.
Read more
Unsupervised translation - training a machine translation model without access to any translation resources at training time. The new approach provides a dramatic improvement over previous state-of-the-art unsupervised approaches and is equivalent to supervised approaches trained with nearly 100,000 reference translations.
Read more
A new feature of the open-source TensorBoard web application, which lets users analyze an ML model without writing code. The What-If Tool offers an interactive visual interface for exploring model results. It has a large set of features, including automatically visualizing your dataset using Facets and the ability to manually edit examples from your dataset and see the effect of those changes.
Read more
Gibson is a virtual environment based on the real world, as opposed to games or artificial environments, to support learning perception. Gibson enables developing algorithms that explore perception and action hand in hand.
It includes a state-of-the-art 3D database of over 1,400 floor spaces and 572 full buildings, scanned using RGBD cameras.

Read more
"Given two videos – one of a target person whose appearance we wish to synthesize, and the other of a source subject whose motion we wish to impose onto our target person – we transfer motion between these subjects via an end to end pixel-based pipeline. We create a variety of videos, enabling untrained amateurs to spin and twirl like ballerinas, perform martial arts kicks or dance as vibrantly as pop stars."
Read more
AI hardware
Developers and systems designers have a number of options available to them for adding some form of neural-networking or deep-learning capability to their embedded designs. So what are the first questions that designers need to answer before dipping their toes into the AI waters? By asking four key questions, developers will be able to zero in on the best AI processor candidates for their specific embedded AI project.
Read more
By 2025, there will be ten times more connected devices (excluding mobile phones) in the world than humans. The average home in a developed nation has almost 10 times as many microcontrollers as CPUs. Nearly every modern appliance is powered by a microcontroller, but they're also so cheap and simple to use that they're in toys, remote controls, thermostats, and just about any other electronic gadget around.
Read more
Apple has unveiled the latest iteration of its smartphone chip: the A12 Bionic SoC (system-on-a-chip).
Huawei also proclaimed the Kirin 980 as the world's first mobile 7nm chipset. Kirin 980 is the world's first chipset to adopt ARM's Cortex-A76 CPU.
Both of the tiny chips are capable of tackling huge tasks, but how do they stack up against one another?

Read more
It remains difficult to deploy CNNs in embedded systems due to tight power budgets.
The paper explores a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time.

Read more
Neuromorphic computing is inspired by the function of the human brain.
To mimic the function of the human brain, neuromorphic computing uses architectures that are fundamentally different from conventional computer hardware.
Massive parallelism of low powered chips and novel ways to engineer their communication are the central elements of all neuromorphic computers.

Read more
Early-generation quantum devices are promising newcomers to the growing collection of AI accelerators for machine learning. Quantum machine learning can lead to the discovery of new models and thereby innovate machine learning. Machine learning, in turn, will increasingly permeate all aspects of quantum computing, redefining the way we think about quantum computing.
Read more
AI competitions
AI is reshaping the landscape of news and information, from the algorithms filtering what we see on social media to the use of ML to generate news stories and online content. This open challenge is seeking fresh and experimental approaches to four specific problems at the intersection of AI and the news: Governing the Platforms, Stopping Bad Actors, Empowering Journalism, Reimagining AI and News.
Read more
Accurate image captioning is a challenging task that requires advancing the state of the art of both computer vision and natural language processing.
To track progress on image captioning, Google AI is announcing the Conceptual Captions Challenge for the machine learning community to train and evaluate their own image captioning models on the Conceptual Captions test bed.
Read more
Participants are tasked with developing a controller that enables a physiologically-based human model with a prosthetic leg to walk and run.
The task is to build a function f that takes the current state observation and returns the muscle excitation action (a 19-dimensional vector) maximizing the total reward.
The objective is to follow the requested velocity vector.
Read more
Events
9-11 October, 2018. Munich, Germany.
Part of the largest global series of events focused on artificial intelligence and its applications across many important fields. Discover the latest breakthroughs in autonomous vehicles, high performance computing, healthcare, big data, and more.

Read more
10-11 October, 2018. Amsterdam.
For the entire AI ecosystem from Enterprise to Big Tech, Startups, Investors and Science. Over 6000 attendees and 140 of the brightest brains on stage to tell you everything you need to know about AI.
Read more
15-16 October, 2018. Helsinki, Finland.
Theme: Harnessing the power of AI.
Target audience: Researchers, Scientists, Professors, Engineers, Students, Smart Innovators, Robotics Technologists, Gaming Professionals, Automation Industry Leaders, Health Care Service Providers, etc.
Read more
Follow us
f(AI)nder View is prepared by f(AI)nder Lab