f(AI)nder View
January 2019
f(AI)nder View - a monthly digest in the field of
Artificial Intelligence, Machine Learning, Deep Learning
exploring the various levels and faces of AI: from basic to sophisticated ones,
from algorithms and technologies to business applications

Quadrant - partner of the January issue
Empowering Data Professionals
In the 21st century economy, data is key to success. But data can be chaotic, confusing, and in some cases, fake. Quadrant provides access to data that is relevant and trustworthy, mapping disparate inputs into reliable, intelligible information that addresses uncertainty when it comes to using data.
The platform uses blockchain technology - the best technical solution currently available for this purpose - to ensure the data is authentic, trustworthy, and usable.
With Quadrant, organisations and individuals can now have full trust in their data and use it to build targeted solutions and allocate resources efficiently to meet the requirements of their customers, citizens, and colleagues.
AI ecosystem
AI industry analytics, trends, foresights, insights, challenges, regulations, etc.
Measuring productivity gains from general-purpose technologies (GPTs) like AI is difficult because much of the early investment is intangible: things like developing new processes or learning new skills. The tangible benefits of a GPT appear only later. The period in between can span a significant amount of time, and productivity measured over that period traces a J-shaped curve. AI may be in the early part of the J-curve now.
Read more
The report presents the results from an extensive look at the American public's attitudes toward AI and AI governance and is based on findings from a nationally representative survey conducted by the Center for the Governance of AI, housed at the Future of Humanity Institute, University of Oxford. The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028.
Read more
Google's contribution to AI ecosystem in 2018 is versatile and tangible: Google AI Principles, responsible AI practices, TensorFlow ecosystem, TPUs, Google Duplex, BERT, JAX, Google Dataset Search, MobileNetV2, Bristlecone, Cirq (an open source programming framework for quantum computers), works on flood prediction and earthquake aftershock prediction, Google AI for Social Impact Challenge and much more.
Read more
The German AI startup landscape comprises 132 companies with a total of €803m in funding. A few geographical hubs dominate the scene, with Berlin (51 startups, 38.6%), Munich (31, 23.5%) and Hamburg (9, 6.8%) housing 68.1% of all German AI startups and controlling 88% of the total funding volume in Germany. The majority of startups (59.1%) operate across industries.
The number of start-ups founded per year has increased significantly, especially in the last three years (2015-2017).

Read more
The model framework will help businesses in Singapore tackle the ethical and governance challenges of AI implementations. Underpinning the framework are two high-level guiding principles – AI implementations should be human-centric, and decisions made or assisted by AI should be explainable, transparent and fair to consumers.
The model framework was announced at the World Economic Forum (WEF) in Davos, Switzerland.

Read more
MIT Technology Review Insights surveyed 871 Asia-based senior executives to gather perspectives, and conducted in-depth interviews with more than a dozen global experts in the field. Some key findings:
Asia has credible potential for becoming a front runner in the AI era; China is rapidly applying AI, but basic research lags; building digital economies and digital societies is a key to competitiveness; the big issues include securing foundational assets and managing AI's evolution.

Read more
Applications
IBM Research has built an AI system that can analyze 300 million articles, papers or records on a given topic and construct a persuasive speech about it.
It would take a human—reading twenty-four hours a day—about 2,000 years to get through the same material.
IBM Project Debater does it in 10 minutes.

Read more
Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone's brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. The advance marks a critical step toward brain-computer interfaces that hold immense promise for those with limited or no ability to speak.
Read more
For that purpose we will use a Generative Adversarial Network (GAN) with an LSTM, a type of Recurrent Neural Network, as the generator, and a Convolutional Neural Network (CNN) as the discriminator. We use an LSTM for the obvious reason that we are trying to predict time-series data. But why do we use a GAN, and specifically a CNN as the discriminator?
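The wiring of such a model can be sketched at the shape level. The following is a minimal, untrained numpy stand-in (not the article's actual model): a single LSTM cell unrolled as the generator and a 1-D convolutional discriminator, forward passes only, with randomly initialized weights and no training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # one LSTM cell step; gate order: input, forget, cell, output
    z = x @ W + h @ U + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

def generator(noise, hidden=16, seq_len=30):
    # LSTM generator: unroll seq_len steps, emit one value (e.g. a price) per step
    W = 0.1 * rng.normal(size=(noise.size, 4 * hidden))
    U = 0.1 * rng.normal(size=(hidden, 4 * hidden))
    b = np.zeros(4 * hidden)
    w_out = 0.1 * rng.normal(size=hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    out = []
    for _ in range(seq_len):
        h, c = lstm_step(noise, h, c, W, U, b)
        out.append(h @ w_out)
    return np.array(out)

def discriminator(series, n_filters=8, width=5):
    # 1-D CNN discriminator: conv -> ReLU -> global average pool -> sigmoid
    filters = 0.1 * rng.normal(size=(n_filters, width))
    conv = np.array([np.convolve(series, f, mode="valid") for f in filters])
    pooled = np.maximum(conv, 0.0).mean(axis=1)
    w = 0.1 * rng.normal(size=n_filters)
    return sigmoid(pooled @ w)   # probability the series is "real"

fake = generator(rng.normal(size=8))   # a generated 30-step series
p_real = discriminator(fake)
print(fake.shape, float(p_real))
```

In adversarial training (omitted here), the discriminator's probability would drive gradient updates pushing the generator toward realistic series.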
Read more
The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, for health systems and for patients.
Read more
The applications of modern AI algorithms within the field of aging research offer tremendous opportunities. Aging is an almost universal unifying feature possessed by all living organisms, tissues, and cells. Modern deep learning techniques used to develop age predictors offer new possibilities for formerly incompatible dynamic and static data types.
Read more
Recently, deep learning has shown great promise in helping make sense of electroencephalography (EEG) signals.
The review covers 156 papers that apply DL to EEG, published between 2010 and 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring.
Read more
Introductions
Auto-Keras and AutoML enable non-deep learning experts to train their own models with minimal domain knowledge of either deep learning or their actual data.
The end goal of both Auto-Keras and AutoML is to reduce the barrier to entry to performing machine learning and deep learning through the use of automated Neural Architecture Search (NAS) algorithms.
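The core idea, automatically searching over model configurations instead of hand-tuning them, can be illustrated with a deliberately tiny stand-in. This is not Auto-Keras and not a real NAS search space; the "architecture" here is just a polynomial degree, but the search loop has the same shape: sample candidates, score each on held-out data, keep the best.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression task: recover y = sin(x) from noisy samples
x = np.linspace(-3, 3, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
train, test = slice(0, 200, 2), slice(1, 200, 2)   # interleaved split

def fit_and_score(degree):
    # an "architecture" here is just a polynomial degree;
    # the score is mean squared error on the held-out half
    coefs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coefs, x[test])
    return np.mean((pred - y[test]) ** 2)

# the search loop: sample candidate configurations, keep the best
candidates = rng.integers(1, 15, size=10)
best = min(candidates, key=fit_and_score)
print(best, fit_and_score(best))
```

Real NAS algorithms search over layer types, connections, and hyperparameters with far smarter strategies than random sampling, but the evaluate-and-select loop is the same.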

Read more
Fourier transforms let us take a signal and split it up into its constituent frequencies. The frequencies tell us about some fundamental properties of the data we have. The Fourier transform is an extremely powerful tool, because splitting things up into frequencies is so fundamental.
They're used in a lot of fields, including circuit design, mobile phone signals, magnetic resonance imaging (MRI), and quantum physics!
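A quick numpy illustration: build a signal from two known sine components, take its real FFT, and read the component frequencies back off the spectrum.

```python
import numpy as np

fs = 1000                          # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)        # one second of samples
# a signal built from two known components: 50 Hz and 120 Hz
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(sig))          # magnitude spectrum
freqs = np.fft.rfftfreq(sig.size, 1 / fs)    # frequency of each bin

# the two largest spectral peaks recover the component frequencies
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(float(p) for p in peaks))  # [50.0, 120.0]
```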

Read more
How to turn a collection of small building blocks into a versatile tool for solving regression problems. Gaussian processes are a powerful tool in the machine learning toolbox. They allow us to make predictions about our data by incorporating prior knowledge. Their most obvious area of application is regression problems, for example in robotics.
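A minimal numpy sketch of GP regression with a squared-exponential kernel, using the standard posterior equations. The target function sin(x), the training points, and the kernel length scale are all arbitrary choices for illustration.

```python
import numpy as np

def rbf(a, b, length=1.0):
    # squared-exponential kernel: similarity decays smoothly with distance
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# training data: a few noise-free observations of sin(x) (tiny jitter for stability)
X = np.array([-4.0, -2.0, 0.0, 1.0, 3.0])
y = np.sin(X)
noise = 1e-6

# standard GP posterior equations at test points Xs
Xs = np.linspace(-5, 5, 101)
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
alpha = np.linalg.solve(K, y)
mean = Ks.T @ alpha
cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

# the posterior mean interpolates the observations,
# and the predictive uncertainty collapses near them
i = int(np.argmin(np.abs(Xs - 1.0)))   # index of the training location x = 1
print(round(float(mean[i]), 3), std[i] < 0.01)
```

The prior knowledge lives in the kernel: choosing `rbf` encodes an assumption that the underlying function is smooth.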
Read more
Monte Carlo Tree Search is one of the core components of the AlphaGo/AlphaZero system. It was introduced by Rémi Coulom in 2006 as a building block of Crazy Stone, a Go-playing engine with impressive performance. From a helicopter view, Monte Carlo Tree Search has one main purpose: given a game state, to choose the most promising next move.
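The four MCTS phases (selection, expansion, simulation, backpropagation) can be shown self-contained on a deliberately tiny game, one-heap Nim rather than Go. The UCB1 constant and iteration count below are arbitrary choices for the sketch.

```python
import math, random

random.seed(1)

def moves(heap):
    # one-heap Nim: remove 1 or 2 stones; whoever takes the last stone wins
    return [m for m in (1, 2) if m <= heap]

class Node:
    def __init__(self, heap, parent=None):
        self.heap, self.parent = heap, parent
        self.untried = moves(heap)       # moves not yet expanded
        self.children = {}               # move -> child Node
        self.visits, self.wins = 0, 0.0  # wins for the player who moved INTO this node

def select(node, c=1.4):
    # UCB1: exploit the best child while still exploring under-visited ones
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(heap):
    # random playout; returns 1 if the player to move at `heap` ends up winning
    me, i_win = True, False
    while heap:
        heap -= random.choice(moves(heap))
        i_win = me                       # the mover on the final iteration wins
        me = not me
    return 1 if i_win else 0

def mcts(root_heap, iters=5000):
    root = Node(root_heap)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:      # 1. selection
            node = select(node)
        if node.untried:                               # 2. expansion
            m = node.untried.pop()
            node.children[m] = Node(node.heap - m, parent=node)
            node = node.children[m]
        r = rollout(node.heap)                         # 3. simulation
        while node:                                    # 4. backpropagation
            node.visits += 1
            node.wins += 1 - r    # mover into node wins iff player-to-move loses
            r = 1 - r             # flip perspective at each level
            node = node.parent
    # the most promising move = the most visited child of the root
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# from a heap of 4, taking 1 leaves 3: a lost position for the opponent
best_move = mcts(4)
print(best_move)
```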
Read more
Today, there are two leading architectures for language modeling – Recurrent Neural Networks (RNNs) and Transformers. Though both architectures have reached impressive achievements, their main limitation is capturing long-term dependencies, e.g. using important words from the beginning of a document to predict words in a subsequent part.
Read more
There is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects.
The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged.

Read more
Toolbox
Subjective list from Sebastian Ruder covers ideas mainly related to transfer learning and generalization: Unsupervised Machine Translation, Pretrained language models, Common sense inference datasets, Meta-learning, Robust unsupervised methods, Understanding representations, Clever auxiliary tasks and other ideas.
Read more
The authors present a method and application for animating a human subject from a single photo: e.g., the character can walk out, run, sit, or jump in 3D. The output animation can be played as a video, viewed interactively on a monitor, or experienced in augmented or virtual reality.
Read more
DLTK, the Deep Learning Toolkit for Medical Imaging extends TensorFlow to enable deep learning on biomedical images.
It provides specialty ops and functions, implementations of models, tutorials (as used in this blog) and code examples for typical applications.

Read more
Qiskit is an open-source quantum computing framework for leveraging today's quantum processors in research, education, and business. Qiskit is driven by an avid community of Qiskitters - from IBM Research, MIT, Princeton, ETH Zurich, Nagoya University and other organizations.
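Quantum programs of the kind Qiskit runs can be emulated at small scale with plain linear algebra. The following numpy sketch (not Qiskit code) prepares a Bell state, the usual "hello world" of quantum computing, by applying a Hadamard and a CNOT to the |00> state.

```python
import numpy as np

# single-qubit gates and a two-qubit CNOT (control = qubit 0, the high bit)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# start in |00>, apply H to qubit 0, then CNOT:
# yields the Bell state (|00> + |11>) / sqrt(2)
state = np.zeros(4)
state[0] = 1.0
state = CNOT @ np.kron(H, I) @ state

# measurement probabilities for |00>, |01>, |10>, |11>
probs = np.abs(state) ** 2
print(np.round(probs, 3))  # [0.5 0.  0.  0.5]
```

Measuring the two qubits gives 00 or 11 with equal probability and never 01 or 10: the entanglement a real framework would demonstrate on hardware or a simulator.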
Read more
JAX is Autograd and XLA, brought together for high-performance machine learning research. JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives.
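A small example of the advertised behavior: `jax.grad` differentiates an ordinary Python function containing a loop, and grads compose for higher derivatives.

```python
import jax

# an ordinary Python function with a loop: f(x) = x + x^2/2 + x^3/3
def f(x):
    total = 0.0
    for i in range(1, 4):
        total = total + x ** i / i
    return total

df = jax.grad(f)    # f'(x)  = 1 + x + x^2
d2f = jax.grad(df)  # f''(x) = 1 + 2x

print(df(2.0), d2f(2.0))  # f'(2) = 7, f''(2) = 5
```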
Read more
A new family of deep neural network models is introduced. Instead of specifying a discrete sequence of hidden layers, the derivative of the hidden state is parameterized using a neural network.
The output of the network is computed using a black-box differential equation solver.
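A rough sketch of that forward pass, with scipy's `solve_ivp` standing in for the black-box solver and a tiny random two-layer network defining the hidden-state derivative (weights untrained, sizes arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# a tiny random two-layer network defines the hidden-state derivative:
# dh/dt = W2 @ tanh(W1 @ h), the continuous analogue of stacked layers
W1 = 0.5 * rng.normal(size=(8, 2))
W2 = 0.5 * rng.normal(size=(2, 8))

def dynamics(t, h):
    return W2 @ np.tanh(W1 @ h)

# the "forward pass": hand the dynamics to a black-box ODE solver
h0 = np.array([1.0, 0.0])            # input, encoded as the initial state
sol = solve_ivp(dynamics, (0.0, 1.0), h0, rtol=1e-6)
h1 = sol.y[:, -1]                    # output = the state at the final time
print(h1.shape)
```

Training (omitted) would backpropagate through the solver, which the paper does in constant memory via the adjoint method.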

Read more
Datasets
Open Images V4, MURA, BDD100K, SQuAD 2.0, CoQA, Spider 1.0, HotpotQA, Tencent ML — Images, Tencent AI Lab Embedding Corpus for Chinese Words and Phrases, fastMRI Dataset.
Read more
From deep-learning-based voice extraction to teaching computers how to read our emotions. Over 1.5 TB of labeled audio datasets: Music Datasets, Speech Datasets, Sound/Nature Datasets.
Read more
CheXpert is a large dataset of chest X-rays and a competition for automated chest X-ray interpretation, featuring uncertainty labels and radiologist-labeled reference-standard evaluation sets.
Read more
AI hardware
The end of Dennard scaling and Moore's Law and the deceleration of performance gains for standard microprocessors are not problems that must be solved but facts that, recognized, offer breathtaking opportunities. The next decade will see a Cambrian explosion of novel computer architectures, meaning exciting times for computer architects in academia and in industry.
Read more
At the 2019 Consumer Electronics Show (CES) IBM unveiled IBM Q System One™, the world's first integrated universal approximate quantum computing system designed for scientific and commercial use. IBM also announced plans to open its first IBM Q Quantum Computation Center for commercial clients. IBM Q System One has a sophisticated, modular and compact design optimized for stability, reliability and continuous commercial use.
Read more
Most AI chips, for both training and inferencing, have been developed for data centers. This trend will soon shift, however: a large part of that processing will happen at the edge of the network, in or close to sensors and sensor arrays.
The edge is where things are going to get interesting, with smartphones, robots, drones, cameras, and security cameras: all the devices that will need some sort of AI processing in them.

Read more
On the computational side, there has been confusion about how TPUs and GPUs relate to BERT. Note that the computational load for BERT is about 90% matrix multiplication. A TPU is a matrix multiplication engine: it does matrix multiplication and matrix operations, but not much else. It is fast at computing matrix multiplications, but one has to understand that the slowest part of matrix multiplication is getting the elements from main memory and loading them into the processing unit.
Read more
How do you know if an HPC system, particularly a larger-scale system, is well-suited for deep learning workloads?
Today, that's not an easy question to answer, in the sense that there is no widely agreed-upon benchmark or reference architecture for comparing DL performance across systems. Deep500, a benchmarking suite, reference architecture, and contest, aims to provide a meaningful assessment tool for deep learning capabilities on HPC platforms.

Read more
The first round of MLperf results are in and while they might not deliver on what we would have expected in terms of processor diversity and a complete view into scalability and performance, they do shed light on some developments that go beyond sheer hardware when it comes to deep learning training. We have seen a great many numbers from Nvidia and Intel when it comes to deep learning performance but MLPerf is placing the TPU directly against the GPU and finally giving us a sense of how these devices scale.
Read more
Events
22-24 February, 2019. Prague.
Practical conference about ML, AI and DL applications. 1,000+ Attendees, 2 Days, 45 Speakers, 8 Workshops, 2 Parties.
Read more
5-6 March, 2019. Singapore.
The only festival in Asia where science meets business and AI meets the real world.
4 stages, 50 speakers, 400 Innovators.
Read more
20-21 March, 2019. Sunnyvale, USA.
Advances in ultra-low-power (on the order of a few mW of power consumption) Machine Learning technologies and applications.
Read more
Follow us
f(AI)nder View is prepared by f(AI)nder Lab