This blog is written by Alexandre Obadia, who is a final year mathematics student at Warwick University. Alexandre researched the Machine Learning space for 3 months this summer at OpenOcean in London, a European venture capital firm focused on software companies raising their Series A. He was interning with Rahul Singireddy, who is a student at Stanford University.
My top 5 AI trends to watch
While researching the machine learning space for 3 months this summer at OpenOcean, I kept up with its latest developments and built enough understanding to form an opinion on where its future areas of growth might lie.
Artificial Intelligence (AI) is now considered by many to be an industrial revolution and has become a trendy buzzword, from Justin Timberlake’s music video set at a 2028 Pan-Asian Deep Learning conference with dancing humanoids to SingularityNET’s $36M ICO raised in 60 seconds, promising to create a decentralized AI marketplace using blockchain technology. (2 buzzwords in 1 project… where do I sign?!)
However, artificial intelligence has been around for a while, with pioneers like Ada Lovelace already reasoning about it in the 1800s, although some pinpoint its ‘real’ starting point to the summer of 1956 at Dartmouth College, during the Summer Research Project on Artificial Intelligence. The field has been through several booms and busts since, the busts known as AI winters, usually triggered when companies showed impressive demos that attracted investment while wildly overestimating the capabilities of their machines.
The current renewed interest in artificial intelligence comes from research progress in machine learning, and more specifically deep learning, driven by better algorithms, investment from governments, big companies and universities, exponential growth in computing performance and in the amount of labeled data, and the open sourcing of research (e.g. arXiv, open-source datasets), which has led to better collaboration and exponential progress.
For those wondering what machine learning is, put simply it is the science of learning: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” — Tom Mitchell, 1997
Deep learning is a subset of machine learning, itself a subset of AI, in which the algorithm that lets the machine improve at a task is a neural network with many hidden layers (‘deep’ refers to the number of layers).
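To make “many layers” concrete, here is a minimal sketch of a forward pass through a small deep network in plain NumPy (the layer sizes and random weights are made up for illustration, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "deep" network is simply several hidden layers stacked:
# input (4) -> hidden (8) -> hidden (8) -> hidden (8) -> output (2)
layer_sizes = [4, 8, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def relu(x):
    # Non-linearity applied between layers; without it the whole
    # stack would collapse into a single linear map.
    return np.maximum(0.0, x)

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)        # each hidden layer transforms the last
    return x @ weights[-1]     # linear output layer

print(forward(rng.standard_normal(4)).shape)  # (2,)
```

Training consists of nudging those weight matrices so the output improves on some task, which is where the “learning from experience” in Mitchell’s definition comes in.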
China, for instance, has committed $150 billion to AI in its current five-year plan. As a comparison, the US spent $1.2 billion on unclassified AI programs in 2016. (Take this with a pinch of salt, given that much of the AI funding in the US is 1) from private companies, and 2) for classified programs.)
On the other side of the Pacific, Canada has become an internationally renowned research hub for AI, with a national AI strategy, leading universities, researchers and research labs. This past March, the Vector Institute was founded, and received $150 million from the government and Canadian businesses, to work with academic institutions, industry, start-ups, incubators and accelerators to advance AI research and drive the application, adoption and commercialization of AI technologies across Canada.
On the corporate front, titans like Google, Baidu, Facebook, Amazon, Tencent, General Motors, Toyota and many more are all putting a big emphasis on AI in their strategies, going as far as declaring it their top priority. They are actively training individuals to short-circuit the industry’s talent shortage by founding their own research labs, launching higher education programs like the Google Brain Residency and partnering with online course platforms like Udacity.
Below are 5 trends in AI that I believe are worth looking into, some coming much sooner than others. I’ll keep it short and try to pique your curiosity with quotes and facts. Consider each point a conversation starter, inviting you to think and comment here.
👨‍🚀 Unsupervised Learning 👨‍🚀
You can classify machine learning systems into broad categories based on whether or not they were trained with human supervision (supervised, unsupervised, semi-supervised and reinforcement learning). Roughly 99% of the machine learning techniques we use today rely on supervised learning, yet researchers seem to agree that unsupervised learning remains the holy grail.
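To make the distinction concrete, here is a toy sketch in plain NumPy (the data is invented): the supervised learner is handed labels, while the unsupervised one, a bare-bones k-means, has to discover the two groups from the inputs alone.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated blobs of 2-D points, around (0, 0) and (5, 5).
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])

# Supervised: labels y are given, so class means can be fit directly.
y = np.array([0] * 20 + [1] * 20)
class_means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

# Unsupervised: no labels. k-means starts from two data points and
# alternates between assigning points and re-estimating centroids.
centroids = X[[0, -1]].copy()
for _ in range(10):
    assign = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])

# Both end up with (roughly) the same two centers, but k-means
# never saw a single label.
print(class_means.round(1))
print(centroids.round(1))
```

On clean, well-separated data like this the two approaches agree; the hard, open research problem is getting unsupervised methods to extract useful structure from messy real-world inputs.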
One of the strongest advocates of this view is Yann LeCun, a member of Deep Learning’s Holy Trinity and current Chief AI Scientist at Facebook, previously Head of Facebook AI Research (FAIR), who uses this slide to explain the importance of unsupervised learning:
To convey just how crucial unsupervised learning is, here is a quote from Geoffrey Hinton, second member of the Deep Learning Holy Trinity and now a researcher at Google Brain in Toronto, who draws an intuitive analogy between supervised and unsupervised learning. The quote is from 1996, yet it is more relevant today than ever.
- “When we’re learning to see, nobody’s telling us what the right answers are — we just look. Every so often, your mother says “that’s a dog”, but that’s very little information. You’d be lucky if you got a few bits of information — even one bit per second — that way. The brain’s visual system has 10¹⁴ neural connections. And you only live for 10⁹ seconds. So it’s no use learning one bit per second. You need more like 10⁵ bits per second. And there’s only one place you can get that much information: from the input itself.” — Geoffrey Hinton
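Hinton’s back-of-the-envelope numbers are worth checking: with roughly 10¹⁴ connections to constrain and only 10⁹ seconds of life, a supervised signal of one bit per second falls short by five orders of magnitude.

```python
connections = 1e14   # rough count of neural connections in the brain's visual system
lifetime_s = 1e9     # a human lifetime in seconds (~30 years)
label_rate = 1.0     # optimistic supervised signal: one bit per second

bits_per_second_needed = connections / lifetime_s
print(bits_per_second_needed)               # 100000.0, i.e. Hinton's 10^5
print(bits_per_second_needed / label_rate)  # supervision is 100000x too slow
```

The only data stream rich enough to close that gap is the unlabeled sensory input itself, which is exactly the argument for unsupervised learning.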
Here’s what other experts in the field like Yoshua Bengio, third member of the Deep Learning Holy Trinity and co-founder of ElementAI, and Andrew Ng, ex-Chief Scientist at Baidu and co-founder of Coursera, have to say about it:
- “Many of the senior people in Deep Learning, including myself, remain very excited about it (unsupervised learning) but none of us have any idea how to make it work yet.” — Andrew Ng in Heroes of Deep Learning
- “We don’t even have a good definition of what’s the right objective function to measure that a system is doing a good job in unsupervised learning. It leaves open such a wide range of research possibilities. What’s exciting is the direction of research, where we’re not trying to build something that’s useful, we’re just going back to basic principles about how can a computer observe the world, interact with the world, and discover how that world works. That’s cool because I don’t have to compete with Baidu, Google or Facebook. This is the kind of research that can be done in your garage.” — Yoshua Bengio in Heroes of Deep Learning
What’s particularly interesting about Bengio’s quote is that even though it may seem the AI train has long left the station, ridden exclusively by industry behemoths, there is still fundamental progress to be made on fronts that aren’t reserved for large corporations. This means there is an opportunity for bold researchers and bold investors to turn the industry on its head.
💂 The Next Step: After Deep Learning 💂
Although deep learning has been on everyone’s lips recently and has had many successful use-cases, from the software behind self-driving cars to Netflix’s recommendation algorithm, some are now arguing that it might be a dead end on the road to Artificial General Intelligence (AGI). By AGI we mean here a machine that has reached or surpassed human intelligence.
“‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.” — Geoff. Hinton, September 15, 2017
Here are some current problems with deep learning put forward by Gary Marcus, ex-director of Uber’s AI labs, in his case against deep learning hype:
- Deep learning is data hungry
- Deep learning thus far is shallow and has limited capacity for transfer learning (applying something learned to a slightly different situation)
- Deep learning thus far cannot inherently distinguish causation from correlation
- Deep learning thus far has struggled with open-ended inference
“For most problems where deep learning has enabled transformationally better solutions (vision, speech), we’ve entered diminishing returns territory in 2016–2017.” — François Chollet, Google, author of the Keras neural network library, December 18, 2017
Although most researchers agree deep learning won’t be enough in itself to get to AGI, many believe it isn’t going to be abandoned but instead built upon. Examples of projects building upon deep learning include Geoffrey Hinton’s capsule networks.
Like unsupervised learning above, this is a fascinating area with no clearly winning direction at the moment, which again leaves an opening for bold researchers and investors to turn the industry on its head.
☄️ AI’s impact on society ☄️
Evercore raised its NVIDIA share price target by 40% last September, arguing that investors “severely” undervalue the potential market for artificial intelligence. This is just one of many examples of how AI’s disruption potential is underestimated. I believe this is an opportunity, and that this trend is actually composed of several sub-trends:
Job displacement and ‘social’ AI: as AI comes to disrupt many industries, it is clear that many individuals are going to lose their jobs. Check how likely you are to lose yours here. As an AI-focused investor, it might be worth looking into job-displacement start-ups that facilitate the transition of displaced workers to new jobs. You’d also be investing for the good of society, a double win that aligns financial and social incentives.
An example of a company doing this is 80,000 Hours, supported by Y Combinator and Sam Altman among other donors. It’s worth noting that, as in every industrial revolution, some jobs disappear while others are created. What worries analysts is that the rate at which jobs are lost may be higher than the rate at which new ones are created.
Explainability: to dampen the shock of this disruption, and also to accelerate adoption, a problem remains to be solved: explainability. Deep learning as it stands is a black box: it is very hard to explain how an algorithm processes an input to produce its output. For people to trust AI and to foster its adoption, we will have to develop techniques that can explain the steps a machine took to reach its answer. Would you trust a machine’s medicine prescription without understanding why it’s giving it to you?
Infrastructural changes: AI’s disruption will not stop at jobs; it will also reshape infrastructure. One of the looming examples is self-driving cars and the urban landscape of the future. As an AI-focused fund, I would definitely look into investing in infrastructural projects that will accommodate AI disruptions.
Here are 2 quotes from the previously mentioned Geoffrey Hinton and from Sebastian Thrun, a major figure of modern artificial intelligence, co-founder of Udacity and ex-leader of Google’s self-driving car project:
- Our relationship with computers has changed: instead of programming them, we now show them. Computer science departments are built around the idea of programming computers; they don’t yet understand that this ‘showing computers’ is going to be as big as ‘programming computers’, and that half the department should be people working on getting computers to do things by showing them, not just a few professors. — Geoffrey Hinton in Heroes of Deep Learning
- It’s not only taxi, bus, and truck drivers who will be disrupted by the self-driving car revolution. If transportation as a service becomes a reality, we would only need a quarter of the cars on the street today. That would be a bad day for many manufacturers and car-insurance companies. Going even further, the urban landscape itself would change, with no more parking spots needed around the city and cities freed of parked cars. — Sebastian Thrun in Udacity talks
👨🏼‍⚕️ Solving the brain with AI 👩🏼‍🔬
We still barely understand how our brain works. By applying concepts discovered in machine learning, we have been able to find analogous mechanisms in the brain, and vice versa.
Geoffrey Hinton and a collaborator came up with an algorithm in 1988 that works analogously to a mechanism in our brain demonstrated in the 1990s by Henry Markram: the Recirculation algorithm and Spike-Timing-Dependent Plasticity seem to operate the same way, relying on the timing of pre- and postsynaptic activity.
Many scientists in the connectionist school of thought believe that progress in machine learning brings us closer to understanding the brain, and that the brain is in turn a source of inspiration for progress in AI. Here is Geoffrey Hinton again, who, by the way, got into machine learning because of his interest in the brain.
- “If it turns out that back prop is a good algorithm for doing learning then surely evolution could’ve figured out a way to implement it in the brain. Presumably there’s this huge selective pressure for it and the brain could have something quite close to back prop!” — Geoffrey Hinton
- In the early days, back in the 50s, people like Von Neumann and Turing didn’t believe in Symbolic AI. They were far more inspired by the brain. Unfortunately, they both died very young and their voice wasn’t heard. — Geoffrey Hinton in Heroes of Deep Learning
Researchers who oppose the connectionist view point to how we were initially inspired by birds when designing planes, but ultimately strayed from birds to design better ones.
Ultimately though, a greater insight into how our brains work could lead to a future depicted in Wait But Why’s must-read Neuralink article. Isn’t it curious that Elon Musk, known for anticipating and accelerating technological trends, co-founded OpenAI, an AI research company, and Neuralink, a neurotechnology company focused on enhancing the brain, within seven months of each other? Food for thought!
🤖 Human-machine synergy and intelligence augmentation 👨
Today, we seem to hear a lot about AI replacing humans, but what if its best purpose is to enhance us, similar in a way to what Neuralink wants to do with the brain?
Many AI start-ups actually brand their product as intelligence augmentation, especially in health care. Yet I feel there isn’t enough emphasis on it in the public media, and not enough importance given to it on the investor side, so I’m adding it to my top 5 AI trends. Here’s a famous example of human-machine synergy:
‘In 1997 Garry Kasparov lost a chess match to Deep Blue, a machine. It was an event remembered by many as a pivotal moment in the relationship between humans and computers. “To many, this was the dawn of a new era, where man would be dominated by machines.” But, argues Sankar, that’s not what happened, “Twenty years later, the greatest change to how we relate to computers was the iPad, not HAL.”
The second chess match was in 2005. In this match, humans and computers could collaborate if they liked. Who won? It wasn’t a grandmaster with a supercomputer, but a couple of amateurs with a few laptops. They were able to counter both the skill of grandmasters and the power of supercomputers by finding the right way to cooperate with the machines — to guide the computers to the right answer.’ — Shyam Sankar
Watch the full TED Talk and check out the movie about the chess match! If you find it a bit outdated, I highly recommend the AlphaGo documentary, released in 2017, documenting the journey of the AlphaGo team and the 5 matches against Lee Sedol.
- Nathan Benaich predicted a rise in AI cybersecurity threats in 2018 that will launch a new wave of cybersecurity start-ups. AI viruses will have unprecedented power to adapt to their adversaries, using techniques from supervised and reinforcement learning to become much harder to stop. Such viruses could directly attack our infrastructure, like our power grids. How will we defend ourselves?
- After going to London.AI #11, I believe I should mention decentralized AI. With the rise of concerns over data ownership, and the data-hungry nature of machine learning, there is a new tension between developers and consumers: developers want the ability to create innovative products and services, while consumers want to avoid sending developers a copy of their data. Decentralized AI is a solution to this problem: a model can now be trained on data it never has direct access to, thanks to technologies like federated machine learning, blockchain, multi-party computation, and homomorphic encryption. Parallel to this trend, there are blockchain projects building decentralized data marketplaces like Streamr, giving data control back to the individual. Curious to see what comes next 👀
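As a toy illustration of the federated idea (plain NumPy, with invented data; real frameworks add secure aggregation, communication protocols and much more on top): each client fits a model on its own private data, and only model parameters, never raw data, leave the device.

```python
import numpy as np

rng = np.random.default_rng(42)

# Four "clients", each holding private data from the same true model y = 3x.
clients = []
for _ in range(4):
    x = rng.normal(size=50)
    clients.append((x, 3 * x + rng.normal(0, 0.1, size=50)))

def local_update(w, x, y, lr=0.1, steps=20):
    # Plain gradient descent on squared error, run on the client's device.
    for _ in range(steps):
        w -= lr * np.mean((w * x - y) * x)
    return w

w_global = 0.0
for _ in range(5):  # federated rounds
    # Each client trains locally; only its updated weight is sent back.
    local_ws = [local_update(w_global, x, y) for x, y in clients]
    # The server averages the weights (FedAvg-style) without seeing raw data.
    w_global = float(np.mean(local_ws))

print(w_global)  # close to the true slope 3
```

The server ends up with a good shared model even though no client’s dataset ever left its device, which is the core privacy promise behind the decentralized AI projects mentioned above.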
EDIT: Graphcore, a leading ML hardware start-up, just posted ‘Directions of AI Research in 2018’; definitely check it out for expert thoughts on where AI is going.
AI newsletters: if this article interested you and you want to keep up with artificial intelligence, I recommend subscribing to Nathan Benaich’s Nathan.AI, Denny Britz’s The Wild Week in AI and Jack Clark’s Import AI!
About me: I’m a Mathematics student at Warwick University with 6 months left before graduating. I researched the Machine Learning space for 3 months this summer at OpenOcean in London, a European venture capital firm focused on software companies raising their Series A. If you want to read more of my content, here are my 2 latest posts: