AI technologies have cemented their place in modern society, and their role is expected to grow in the years ahead. Machine learning in particular has surged in recent years, thanks to the large amounts of data now available and to increased computing power. The subset of machine learning that "teaches" computers using structures inspired by the human brain is called deep learning.
The concept of deep learning
Drawing on the neural network of the human brain, deep learning constructs artificial neural networks with input and output layers. The system is not taught by humans through explicit rules. Instead, it is fed many examples of a given phenomenon (in the input layer) along with the correct answers (in the output layer). The network between those layers is trained on large quantities of data to maximize the chance of producing the correct output; that process is deep learning. The term is often used interchangeably with "machine learning," since deep learning is the subfield that has produced AI's most successful applications.
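The loop described above can be sketched in miniature. The example below is a hedged illustration, not the book's method: a single artificial "neuron" (the simplest possible network) is shown input examples and correct answers, and its weights are nudged by gradient descent until its outputs match the labels. The data is made up for demonstration.

```python
import math

def sigmoid(z):
    # Squashes any number into (0, 1), read here as "probability of label 1"
    return 1.0 / (1.0 + math.exp(-z))

# Input layer: toy two-feature examples; output layer: the correct answer (0 or 1).
# Hypothetical data: a high first feature means label 1, a high second means label 0.
examples = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.1, 0.8], 0)]

w, b = [0.0, 0.0], 0.0   # the network's adjustable parameters
lr = 1.0                  # learning rate: how far each correction moves the weights

for _ in range(500):      # repeated passes over the training data
    for x, y in examples:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)  # the network's current guess
        err = p - y                                  # how far the guess is from the answer
        # Nudge each weight to make the correct output more likely next time
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

# After training, the model assigns high probability to the correct label
for x, y in examples:
    print(x, y, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b), 2))
```

Real deep networks stack many layers of such units and train on millions of examples, but the principle is the same: adjust parameters so the chance of the correct output goes up.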
In their book AI 2041: Ten Visions for Our Future, computer scientist and writer Dr. Kai-Fu Lee and science fiction writer Chen Qiufan use two examples to explain how a deep learning system is trained.
The first example concerns cat recognition. A deep learning network is trained mathematically to maximize the value of an objective function (in this case, the probability of correct recognition), and the network operates as a giant mathematical equation for finding the correct answers. After receiving data on the phenomenon and "learning" to determine the presence or absence of cats, the system can repeat the process on images it has never seen. Recognition and the other uses of deep learning – prediction, classification, decision-making, synthesis – can be applied to almost any domain.
In another example, the authors consider an imagined AI-powered insurance app, "The Golden Elephant." The app determines the likelihood that a given user will develop serious health issues and sets premiums accordingly. Training would consist of feeding the network data about all past insurance applicants, their medical claims, and their family information. In the output layer, each case would carry a label indicating whether the applicant filed serious health claims. As in the cat recognition example, the AI applies the same process to data it hasn't seen before and infers the likelihood that a new applicant will file serious health claims.
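The inference step, scoring an applicant the system has never seen by comparison with labeled past cases, can be illustrated with a deliberately simple stand-in. The book's app would use a deep network; here a hypothetical k-nearest-neighbors scorer with invented applicant records makes the idea concrete.

```python
# Hedged sketch of the Golden Elephant idea: score a new applicant by how
# similar past applicants fared. All records below are hypothetical.

def risk_score(new, past, k=3):
    """Fraction of the k most similar past applicants who filed serious claims."""
    def dist(a, b):
        # Euclidean distance between two feature tuples
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(past, key=lambda rec: dist(rec[0], new))[:k]
    return sum(label for _, label in nearest) / k

# Features: (age, number of past claims); label 1 = later filed a serious claim
past_applicants = [
    ((25, 0), 0), ((31, 1), 0), ((58, 4), 1),
    ((62, 3), 1), ((45, 2), 1), ((29, 0), 0),
]

print(risk_score((60, 3), past_applicants))  # resembles the high-risk records
print(risk_score((27, 0), past_applicants))  # resembles the low-risk records
```

A real underwriting model would ingest far richer features, but the pattern is the same: unseen cases are judged by the labeled history the network was trained on.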
One of the key elements of the process is data. Deep learning is possible only with large amounts of data and substantial computing power, and both requirements have been met only in the last decade. Compared with the human brain, deep learning networks require far more data to learn.
With this in mind, let's consider the advantages and limitations that these conditions impose on deep learning.
The highs and the lows of deep learning
Deep learning networks outperform humans at working with data, especially at quantitative optimization. They are not limited in the number of things they can attend to at once, and because they can process huge amounts of data quickly, they can follow any individual user's patterns and customize accordingly, yielding highly targeted accuracy.
On the other hand, deep learning algorithms do not function well across multiple domains at once or with an uncertain objective function. They lack humans' unique ability to draw on experience, reason with abstract concepts, and apply common sense in decision-making. Massive amounts of relevant data, a narrow domain, and a concrete objective function are all equally crucial for a deep learning network to perform well.
If large amounts of data are what make deep learning work, it is no wonder that the biggest internet companies, with their unprecedented access to huge information flows, come out the winners. By steering users toward actions like staying on the page or making a purchase, their systems maximize business metrics like clicks and revenue, which turns an app or platform into "a money-printing machine."
Fintech is another field where deep learning can generate profit. Companies like U.S.-based Lemonade and China-based Waterdrop provide insurance and loans through apps, and their model of instant transactions and lower costs is attracting huge numbers of customers. Furthermore, the AI networks at these companies draw on data beyond the reach of any human insurance underwriter to assess the relative risk of the insured person: the client's investments in hedge funds, preferred food and recreation, personal relationships. Clear proof of fintech's success is the rush among traditional financial companies to build AI into their own processes.
The first risk of AI concerns personalization, as Dr. Lee and Mr. Qiufan note in their book. We are all familiar with how AI algorithms work: they notice and remember what you like, then offer you more of it. Gradually, an algorithm gets to know you so well that it shows you only things you like. As the documentary The Social Dilemma (2020) argues, when each of us is surrounded exclusively by things we like and opinions we agree with, viewpoints narrow and societies grow polarized. AI does not understand this; it is focused on the objective function it was given, and it simply does what is required.
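The feedback loop behind this narrowing can be shown in a few lines. The simulation below is a hypothetical toy, not any real platform's algorithm: a feed boosts whatever topic the user engages with, and because engagement is the objective function, the mix of topics collapses toward a single viewpoint.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Four topics start with equal chances of being shown (hypothetical feed)
weights = {"politics_a": 1.0, "politics_b": 1.0, "sports": 1.0, "science": 1.0}

def pick(weights):
    # Sample one topic in proportion to its current weight
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w)[0]

user_likes = {"politics_a"}      # the user only ever clicks one viewpoint

for _ in range(200):
    topic = pick(weights)
    if topic in user_likes:      # engagement is the objective function...
        weights[topic] *= 1.1    # ...so the algorithm shows more of the same

share = weights["politics_a"] / sum(weights.values())
print(round(share, 2))           # the feed is now dominated by one viewpoint
```

No step in the loop is malicious; maximizing the given objective is enough to produce the filter bubble the documentary describes.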
Another issue is bias. If the data fed to a deep learning network is biased against a certain demographic group, the network will recreate that bias in its decision-making. Alternatively, AI can become a tool of discrimination if it can identify a group that is marginalized in society. The book again uses the example of "The Golden Elephant": in the script, the app tries to keep user Sahej apart from another user, Nayana, because of the former's Dalit status. The authors suggest government regulations and AI audits as possible remedies to these problems.
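How biased history becomes biased prediction can be demonstrated directly. In this hedged sketch (all data invented), past decisions rejected members of one group regardless of qualification; a simple model fit to that history then scores two identically qualified applicants differently based only on group membership.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Features: (qualification, group), group 1 being the marginalized group.
# Hypothetical history: equally qualified group-1 applicants were rejected.
history = [
    ((0.9, 0), 1), ((0.8, 0), 1), ((0.4, 0), 0),
    ((0.9, 1), 0), ((0.8, 1), 0), ((0.4, 1), 0),
]

w, b = [0.0, 0.0], 0.0
for _ in range(2000):                 # fit a single-neuron model to the history
    for x, y in history:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w = [wi - 0.5 * err * xi for wi, xi in zip(w, x)]
        b -= 0.5 * err

# Two applicants with identical qualification, differing only in group
p_group0 = sigmoid(w[0] * 0.85 + b)
p_group1 = sigmoid(w[0] * 0.85 + w[1] + b)
print(round(p_group0, 2), round(p_group1, 2))  # the model has learned the bias
```

The model was never told to discriminate; it simply reproduced the pattern in its training data, which is why the authors argue audits of both data and decisions are needed.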
Finally, a lack of transparency remains a downside of AI. The reasons behind a deep learning network's key decision are often too complex to explain, since they are distilled from vast quantities of data, yet such justification is required by law or expected by users.
The advantages of deep learning are fundamental, and so are its downsides. Like all new technologies, AI needs time for its errors and flaws to be corrected. Technology and policy solutions, the chapter concludes, can set things right in time.