f(AI)nder View
November 2018
f(AI)nder View - a monthly digest in the field of
Artificial Intelligence, Machine Learning, Deep Learning
exploring the various levels and faces of AI: from basic to sophisticated ones,
from algorithms and technologies to business applications

AI ecosystem
AI industry analytics, trends, foresights, insights, challenges, regulations, etc.
Potential total annual value of AI and analytics across industries: $9.5T-$15.4T. Size the opportunity and determine data needs. Learn best practices to scale AI: aligning on strategy, building tech, data and people capabilities, and completing the last mile. Beware: know the warning signs of AI program failure.
Read more
Now is the time for organizations to start selecting the business use cases that can deliver measurable value through AI-powered capabilities. To obtain a cross-industry view of how organizations are adopting and benefiting from cognitive computing/AI, Deloitte surveyed 1,100 IT and line-of-business executives from US-based companies.
Read more
Research: Technology breakthroughs and their capabilities.
Talent: Supply, demand and concentration of talent working in the field.
Industry: Large platforms, financings and areas of application for AI-driven innovation today and tomorrow.
Politics: Public opinion of AI, economic implications and the emerging geopolitics.

Read more
Top industry sectors for AI applications: Agriculture, Healthcare, Education.
Top AI applications: Automation of Business Processes, Chatbots, Natural Language Processing, Image Recognition.
Current challenges to AI traction: Indian society is not as forgiving of failure in entrepreneurship as the US or Europe.

Read more
Artificial general intelligence (AGI) is the long-range, human-intelligence-level target of contemporary AI technology.
While many AI experts believe AGI is still a far-fetched fantasy unachievable with existing tech, Ilya Sutskever, co-founder and research director of OpenAI, has a decidedly different point of view.

Read more
Richard P. Feynman: "Nature isn't classical and if you want to make a simulation of nature, you'd better make it quantum mechanical". Quantum computing enables exponential increases in speed by harnessing the weirdness of quantum mechanics. The key challenge is to build robust systems at scale.
Read more
Applications
With GANs we can generate not only images but also text, sounds, voices, music,
or structured data such as game levels.
GANs can also be applied in a number of other areas:
Data augmentation, Privacy preservation, Anomaly detection, Discriminative modeling, Domain adaptation, Data manipulation, Adversarial training (a minimal training-loop sketch follows this item).

Read more
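A hedged aside to the GAN item above: a minimal GAN training loop in PyTorch on toy 1-D data. It only illustrates the adversarial setup, not code from any referenced work; the network sizes, data distribution and hyperparameters are placeholder assumptions.

# Minimal GAN sketch (PyTorch): generator vs. discriminator on toy 1-D data.
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))       # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0            # "real" samples drawn from N(2, 0.5)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator update: push real toward 1, generated toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator output 1 on generated samples.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()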
Is it possible to design an AI system that develops theories the way Galileo did, zeroing in on the information it needs to explain different aspects of the world it observes? Today we get an answer: researchers from MIT have developed an AI system that copies Galileo's approach and some of the other tricks that physicists have learned over the centuries.
Read more
"AI anchors" — digital composites created from footage of human hosts that read the news using synthesized voices.
Each anchor (one for English broadcasts and one for Chinese) can "work 24 hours
a day on its official website and various social media platforms, reducing news production costs and improving efficiency," says Xinhua.

Read more
AI and ML can help mining companies find minerals to extract, a critical component of any smart mining operation. Autonomous vehicles and drillers have reduced fuel use and are safer to operate. Some companies have also begun to use smart machines that sort the mined material.
Read more
Using an AI design process, engineers at software company Autodesk and NASA's Jet Propulsion Laboratory came up with a new interplanetary lander concept that could explore distant moons like Europa and Enceladus. Its slim design weighs less than most of the landers that NASA has already sent to other planets and moons.
Read more
Researchers are exploring machine learning to process the data obtained during 3D builds in real time, detecting within milliseconds whether a build will be of satisfactory quality. "The advantage is that you can collect video while you're printing something and ultimately make conclusions as you're printing it".
Read more
Introductions
An overview of transfer learning and why it warrants our attention.
What is Transfer Learning?
Why Transfer Learning Now?
Transfer Learning Scenarios.
Applications of Transfer Learning.
Transfer Learning Methods.
Read more
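As a hedged companion to the overview above, a minimal transfer-learning sketch in tf.keras: reuse a pre-trained ImageNet backbone as a frozen feature extractor and train only a small new head. The target class count, input shape and datasets are assumptions, not taken from the article.

# Transfer learning sketch (tf.keras): frozen pre-trained backbone + new head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                                 # keep the ImageNet features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 target classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets supplied by the reader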
A human can act and adapt intelligently
to a wide variety of new, unseen situations.
How can we enable our artificial agents to acquire such versatility?
The approach of learning to learn, or meta-learning, is a key stepping stone towards versatile agents that can continually learn a wide variety of tasks throughout their lifetimes. What is learning to learn, and what has it been used for?
Read more
The traditional paradigm in machine learning research is to get a huge dataset on a specific task, and train a model from scratch using this dataset.
That's very far from how humans leverage past experience to quickly learn a new task from only a handful of examples.
The idea of meta-learning is to learn the learning process itself (a minimal sketch follows this item).

Read more
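A hedged sketch of "learning the learning process": a Reptile-style first-order meta-learning loop in PyTorch on toy sine-regression tasks. It illustrates the general idea rather than any specific method from the articles above; the task distribution, model and step sizes are assumptions.

# Reptile-style meta-learning sketch: adapt a copy of the model to each sampled
# task, then nudge the shared initialization toward the adapted weights.
import copy, math, torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

def sample_task(n=20):
    # A random sine wave: the "new task" the learner must adapt to quickly.
    amp, phase = torch.rand(1) * 4 + 0.1, torch.rand(1) * math.pi
    x = torch.rand(n, 1) * 10 - 5
    return x, amp * torch.sin(x + phase)

for meta_step in range(1000):
    task_model = copy.deepcopy(model)                  # start from the shared initialization
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    x, y = sample_task()
    for _ in range(inner_steps):                       # quick adaptation to this task
        opt.zero_grad()
        nn.functional.mse_loss(task_model(x), y).backward()
        opt.step()
    with torch.no_grad():                              # meta-update of the initialization
        for p, q in zip(model.parameters(), task_model.parameters()):
            p.add_(meta_lr * (q - p))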
Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, require the ability to consolidate experience into concepts.
An energy-based model can quickly learn to identify and generate instances of concepts, such as near, above, between, closest, and furthest, expressed as sets of 2d points.

Read more
A lot of the biggest challenges in Reinforcement Learning revolve around two questions: how we interact with the environment effectively (e.g. exploration vs. exploitation, sample efficiency), and how we learn from experience effectively.
A few recent directions in deep RL research: hierarchical RL, memory and predictive modeling, combined model-free and model-based approaches.

Read more
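To make the exploration-vs-exploitation question above concrete, a hedged sketch of epsilon-greedy tabular Q-learning on a toy chain environment. This is a classic baseline used purely for illustration, not one of the recent research directions listed; the environment and constants are assumptions.

# Epsilon-greedy Q-learning sketch: explore with probability eps, otherwise
# exploit the current value estimates. Toy chain world with reward at the end.
import numpy as np

n_states, n_actions = 10, 2                 # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for t in range(50):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0          # reward only at the far end
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if r > 0:                                       # episode ends at the goal
            break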
The first part explains the basic quantum theory, then quantum computation and quantum computer architecture are explained in section two. The third part presents quantum algorithms which will be used as subroutines in quantum machine learning algorithms. Finally, the fourth section describes quantum machine learning algorithms with the use of knowledge accumulated in previous parts.
Read more
Toolbox
Most imitation learning approaches require concise representations, such as those recorded from motion capture (mocap).
The authors present a framework for learning skills from videos (SFV) by combining state-of-the-art techniques in computer vision and reinforcement learning. The framework is structured as a pipeline consisting of three stages: pose estimation, motion reconstruction, and motion imitation.

Read more
TensorSpace provides Keras-like APIs to build deep learning layers, load pre-trained models, and generate a 3D visualization in the browser. The framework is designed not only to show the basic model structure, but also to present internal feature abstractions, intermediate data manipulations and final inference generation. TensorSpace supports visualizing pre-trained models from TensorFlow, Keras and TensorFlow.js.
Read more
Spinning Up - an educational resource designed to let anyone learn to become a skilled practitioner in deep RL - consists of the following core components:
a short introduction to RL, a curated list of important papers, a well-documented code repo of short, standalone implementations (Vanilla Policy Gradient, Proximal Policy Optimization, Soft Actor-Critic, etc.), a few exercises to serve as warm-ups.

Read more
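As a hedged companion to the Spinning Up item: a minimal REINFORCE-style vanilla policy gradient loop in PyTorch on CartPole. It illustrates the algorithm family Spinning Up documents; it is not the Spinning Up implementation, and it assumes the older gym step/reset API that was current in 2018.

# Minimal vanilla policy gradient (REINFORCE) sketch on CartPole-v0.
# Uses the pre-0.26 gym API; hyperparameters are illustrative assumptions.
import gym, torch
import torch.nn as nn

env = gym.make("CartPole-v0")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(200):
    obs, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, r, done, _ = env.step(int(action))
        rewards.append(r)
    ret = sum(rewards)                                  # whole-episode return as the weight
    loss = -torch.stack(log_probs).sum() * ret          # policy gradient surrogate loss
    opt.zero_grad(); loss.backward(); opt.step()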
PocketFlow is an open-source framework for compressing and accelerating deep learning models with minimal human effort. DL models are often computationally expensive, which limits further applications on mobile devices with limited computational resources.
PocketFlow aims to provide an easy-to-use toolkit for developers to improve inference efficiency (a generic pruning sketch follows this item).

Read more
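The generic pruning sketch referenced above: magnitude-based weight pruning in PyTorch, one of the compression techniques a toolkit like PocketFlow automates. This is an illustration of the idea, not PocketFlow's API; the model and sparsity target are assumptions.

# Magnitude pruning sketch: zero out the smallest-magnitude weights of each
# linear layer until a target fraction of weights is zero.
import torch
import torch.nn as nn

def prune_by_magnitude(model: nn.Module, sparsity: float = 0.5) -> None:
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            k = int(w.numel() * sparsity)              # number of weights to remove
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            w[w.abs() <= threshold] = 0.0

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
prune_by_magnitude(model, sparsity=0.7)                # roughly 70% of weights set to zero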
AdaNet is a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. AdaNet builds models by implementing an adaptive algorithm that learns a neural architecture as an ensemble of subnetworks. It is capable of adding subnetworks of different depths and widths to create a diverse ensemble.
Read more
Xanadu, a leader in photonic quantum computing and advanced AI, announced PennyLane, the first dedicated machine learning software for quantum computers. Free and open source, PennyLane will enable programmers, researchers, and enthusiasts worldwide to take part in the cutting-edge field of quantum machine learning - the next big step for AI.
Read more
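A hedged taste of what PennyLane code looks like, based on its current public API (which may differ from the version announced in 2018): a one-qubit variational circuit whose parameter is optimized by gradient descent.

# PennyLane sketch: train a single rotation angle so the qubit's Pauli-Z
# expectation is driven toward -1. API calls follow current PennyLane releases.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.1, requires_grad=True)
for _ in range(50):
    theta = opt.step(circuit, theta)     # gradients of the quantum circuit via PennyLane
print(circuit(theta))                    # approaches -1 as theta approaches pi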
Datasets
12000+ datasets, covering image data, economics, business, finance, demographics, linguistics, etc.
File types: CSV, SQLite, JSON, BigQuery.
Read more
450+ datasets, 33.74 TB of research data available. The service is designed to facilitate storage of all the data used in research, including datasets.
Read more
List of 50+ selected dataset resources, prepared by Google AI.
Dataset types: Image, Video, Audio, Text Annotation, Robotics, etc.

Read more
AI hardware
Intel has announced it's making the process of imparting intelligence into smart home gadgets and other network edge devices faster and easier thanks to the company's latest invention: the Neural Compute Stick 2. Edge devices include not just routers, switches and gateways but also a range of IoT gadgets like Ring doorbell cameras, industrial robots, smart medical devices and self-guided camera drones.
Read more
Some analysts forecast that the AI chip market will be worth $91 billion by 2025.
Esperanto Technologies aims to develop energy-efficient, high-performance compute solutions based on RISC-V, an open source and royalty-free instruction set architecture (ISA). It'll leverage standards such as the Open Compute Platform (OCP), Facebook's PyTorch framework and Glow compiler, and the Open Neural Network Exchange (ONNX) to accelerate AI and ML workflows.

Read more
Researchers from the University of Pavia in Italy have built the world's first perceptron implemented on a quantum computer and then put it through its paces on some simple image processing tasks. It turns out that the quantum perceptron can easily classify the patterns in the simple images. They go on to show how it could be used on more complex patterns, albeit in a way that is limited by the number of qubits the quantum processor can handle.
Read more
Princeton researchers have built a new type of computer chip, based on a technique called in-memory computing, that boosts the performance and slashes the energy demands of systems used for AI.
The chip, which works with standard programming languages, could be particularly useful on phones, watches or other devices that rely on high-performance computing and have limited battery life.

Read more
Is your smartphone capable of running the latest Deep Neural Networks to perform AI-based tasks? Is it fast enough?
The benchmark consists of 9 Computer Vision AI tasks performed by 9 separate Neural Networks running on your smartphone. The networks considered cover a broad range of architectures, which makes it possible to assess the performance and limits of the various approaches used to solve AI problems.

Read more
Deep learning models, such as the ResNet-50 convolutional neural network, are trained using floating point arithmetic. Because floating point is resource-intensive, AI deployment systems typically rely on one of a handful of now-standard integer quantization techniques using int8/32 math (a basic affine-quantization sketch follows this item). An alternate approach makes AI models as much as 16 percent more efficient than int8/32 math.
Read more
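The sketch referenced above: basic affine (scale and zero-point) int8 quantization in numpy, for context on what int8/32 quantization does; the example tensor values are made up.

# Affine int8 quantization sketch: map floats to int8 with a scale and
# zero-point, then dequantize back. Example tensor values are made up.
import numpy as np

x = np.array([-1.2, 0.0, 0.5, 3.1], dtype=np.float32)

qmin, qmax = -128, 127
scale = (x.max() - x.min()) / (qmax - qmin)
zero_point = int(np.round(qmin - x.min() / scale))

x_q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
x_dq = (x_q.astype(np.float32) - zero_point) * scale   # dequantized approximation

print(x_q)    # int8 codes
print(x_dq)   # close to the original floats, up to quantization error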
AI competitions
A new facility -- the Large Synoptic Survey Telescope (LSST) -- will discover 10 to 100 times more astronomical sources that vary in the night sky than we've ever known. Some of these sources will be completely unprecedented! The Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC) asks participants to classify the data from this new survey.
December 10, 2018 - Entry deadline.
You must accept the competition rules before this date in order to compete.

Read more
Can we use the content of news analytics to predict stock price performance?
The challenge is ingesting and interpreting the data to determine which data is useful, finding the signal in this sea of information. Data for this competition comes from the following sources: market data provided by Intrinio, news data provided by Thomson Reuters.
January 8, 2019 - Submission deadline.
No further submissions will be accepted after this date.
Read more
Participants will develop models capable of classifying mixed patterns of proteins in microscope images.
The Human Protein Atlas will use these models to build a tool integrated with their smart-microscopy system. Images visualizing proteins in cells are commonly used for biomedical research, and these cells could hold the key for the next breakthrough in medicine.
January 3, 2019 - Entry deadline.
You must accept the competition rules before this date in order to compete.

Read more
Learning the Kaggle Environment and an Introductory Notebook. The article and introductory kernel demonstrate a basic start to a Kaggle competition. The notebook isn't meant to win, but rather to show you the basics of how to approach a machine learning competition, along with a few models to get you off the ground.
Read more
Focus on three crucial steps of any machine learning project: feature engineering, feature selection, and model evaluation.
Feature engineering is the process of creating new features from existing data; feature selection then prunes them, because too many features can slow down training, make a model less interpretable and, most critically, reduce the model's generalization performance on the test set (a short pandas sketch follows this item).
Read more
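The pandas sketch referenced above: a small example of feature engineering (deriving new columns) followed by a simple correlation-based feature selection step. The toy DataFrame and the 0.95 threshold are assumptions.

# Feature engineering + correlation-based feature selection sketch (pandas).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "loan_amount": [1000, 5000, 2500, 4000],
    "income":      [30000, 60000, 25000, 52000],
    "age":         [25, 40, 31, 37],
})

# Feature engineering: derive new features from existing columns.
df["loan_to_income"] = df["loan_amount"] / df["income"]
df["log_income"] = np.log(df["income"])

# Feature selection: drop one of every pair of highly correlated features.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
df_selected = df.drop(columns=to_drop)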
Focus on a crucial aspect of the ML pipeline: model optimization through hyperparameter tuning. Model hyperparameters, in contrast to model parameters that are learned during training, are set by the data scientist before training. There are a handful of ways to tune an ML model: Manual, Grid Search, Random Search, Automated Hyperparameter Tuning (a random-search sketch follows this item).
Read more
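The random-search sketch referenced above, using scikit-learn's RandomizedSearchCV; the model, parameter ranges and dataset are illustrative assumptions rather than anything from the article.

# Random search over hyperparameters with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 4, 8, 16],
    "min_samples_leaf": [1, 2, 4, 8],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20, cv=5, scoring="accuracy", random_state=0)
search.fit(X, y)

print(search.best_params_, search.best_score_)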
Events
4-5 December, 2018. Singapore.
The topics covered: The Next Wave of Digital Transformation, Leveraging Big Data and AI at Your Advantage, Real World Applications of Machine Learning, Experts' Round Table and Business Case Studies.
Read more
10-12 December, 2018. Mountain View, USA.
Practical Quantum Computing:
application development in optimization, simulation, machine learning and cryptography using quantum algorithms and quantum computing resources.
Read more
21-23 December, 2018. Sanya, China.
2018 International Conference on Algorithms, Computing and Artificial Intelligence (ACAI): aims to bring together researchers, engineers, developers and practitioners from academia and industry.
Read more
Follow us
f(AI)nder View is prepared by f(AI)nder Lab