What Is AI?
Less than a decade after helping the Allied forces win World War II by breaking the Nazi encryption machine Enigma, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?”
Turing’s 1950 paper “Computing Machinery and Intelligence” and its subsequent Turing Test established the fundamental goal and vision of AI.
At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines. The expansive goal of AI has given rise to many questions and debates. So much so that no singular definition of the field is universally accepted.
The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what makes a machine intelligent. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning in particular are creating a paradigm shift in virtually every sector of the tech industry.
However, various new tests of machine intelligence have been proposed in recent years and largely well received, including the 2019 research paper “On the Measure of Intelligence.” In the paper, veteran deep learning researcher and Google engineer François Chollet argues that intelligence is the “rate at which a learner turns its experience and priors into new skills at valuable tasks that involve uncertainty and adaptation.” In other words: The most intelligent systems can take just a small amount of experience and generalize from it to predict outcomes in many varied situations.
Meanwhile, in their book Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the concept of AI by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.”
ARTIFICIAL INTELLIGENCE DEFINED: FOUR TYPES OF APPROACHES
- Thinking humanly: mimicking thought based on the human mind.
- Thinking rationally: mimicking thought based on logical reasoning.
- Acting humanly: acting in a manner that mimics human behavior.
- Acting rationally: acting in a manner that is meant to achieve a particular goal.
The first two ideas concern thought processes and reasoning, while the other two deal with behavior. Russell and Norvig focus particularly on rational agents that act to achieve the best outcome, noting that “all the skills needed for the Turing Test also allow an agent to act rationally.”
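Russell and Norvig’s agent abstraction, “agents that receive percepts from the environment and perform actions,” can be made concrete with a small sketch. The two-location “vacuum world” below is a toy environment in the spirit of the introductory examples in their book; the function names and world representation here are illustrative assumptions, not any library’s API.

```python
# A minimal sketch of the percept -> action agent abstraction:
# the agent is just a function from percepts to actions.
# The two-square "vacuum world" is a toy example; all names here
# are illustrative assumptions, not a real library's API.

def reflex_vacuum_agent(percept):
    """Map a percept (location, status) directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(agent, world, location, steps):
    """Drive the loop: feed the agent percepts, apply its actions."""
    for _ in range(steps):
        action = agent((location, world[location]))
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
    return world

world = run(reflex_vacuum_agent, {"A": "Dirty", "B": "Dirty"}, "A", 4)
print(world)  # after four steps, both squares are clean
```

This agent is “rational” in the narrow sense that each action it takes moves the world toward its goal of clean squares; more capable agents replace the hard-coded rules with learned models, but the percept-in, action-out loop stays the same.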
Former MIT professor of AI and computer science Patrick Winston defined AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”
While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with ML and other subsets of AI.
The Future of AI
Given the computational costs and the technical data infrastructure running behind artificial intelligence, actually executing on AI is a complex and costly business. Fortunately, there have been massive advancements in computing technology, as indicated by Moore’s Law, which states that the number of transistors on a microchip doubles about every two years while the cost of computers is halved.
Although many experts believe that Moore’s Law will likely come to an end sometime in the 2020s, it has had a major impact on modern AI techniques; without it, deep learning would be out of the question, financially speaking. Recent research found that AI innovation has actually outpaced Moore’s Law, doubling roughly every six months rather than every two years.
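The gap between those two doubling rates compounds quickly. The back-of-the-envelope calculation below assumes idealized clean doubling periods of 24 months and 6 months, taken straight from the figures quoted above, and compares them over one decade:

```python
# Compare two exponential doubling schedules over a decade:
# Moore's Law (doubling about every 24 months) versus the reported
# pace of AI progress (doubling about every 6 months).
# These are the idealized rates quoted in the text, not measurements.

def growth_factor(months, doubling_period_months):
    """How many times a quantity multiplies after `months` elapse."""
    return 2 ** (months / doubling_period_months)

DECADE = 120  # months

moore = growth_factor(DECADE, 24)  # 2**5  = 32x
ai = growth_factor(DECADE, 6)      # 2**20 = 1,048,576x

print(f"Moore's Law over a decade: {moore:.0f}x")
print(f"Six-month doubling over a decade: {ai:,.0f}x")
```

Under these assumptions, a decade of Moore’s Law yields a 32-fold improvement, while six-month doubling yields a gain of over a million-fold, which is why the six-month figure, if it holds, dwarfs hardware gains alone.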
At that pace, artificial intelligence has made major advances across a variety of industries over the last several years, and an even greater impact over the next several decades seems all but inevitable.
So, how do we work with these intelligent machines?
Let’s use surgery as an example:
It may be extreme work, but until recently surgeons in training learned their profession the same way most of us learned how to do our jobs: We watched an expert, got involved in the easier work first, and then progressed to harder, often riskier tasks under close supervision until we became experts ourselves. This process goes by lots of names: apprenticeship, mentorship, on-the-job learning (OJL). In surgery, it’s called See one, do one, teach one.
Critical as it is, companies tend to take on-the-job learning for granted; it’s almost never formally funded or managed, and little of the estimated $366 billion companies spent globally on formal training in 2018 directly addressed it. Yet decades of research show that although employer-provided training is important, the lion’s share of the skills needed to reliably perform a specific job can be learned only by doing it. Most organizations depend heavily on OJL: A 2011 Accenture survey, the most recent of its kind and scale, revealed that only one in five workers had learned any new job skills through formal training in the previous five years.
Today OJL is under threat. The headlong introduction of sophisticated analytics, AI, and robotics into many aspects of work is fundamentally disrupting this time-honored and effective approach. Tens of thousands of people will lose or gain jobs every year as those technologies automate work, and hundreds of millions will have to learn new skills and ways of working. Yet broad evidence demonstrates that companies’ deployment of intelligent machines often blocks this critical learning pathway: It moves trainees away from learning opportunities and experts away from the action, and overloads both with a mandate to master old and new methods simultaneously.
Obstacles to Learning
Below are the four widespread obstacles to acquiring needed skills that drive shadow learning, the unsanctioned, informal practices workers adopt to build skills when approved methods fall short:
1. Trainees are being moved away from their “learning edge.”
Training people in any kind of work can incur costs and decrease quality, because novices move slowly and make mistakes. As organizations introduce intelligent machines, they often manage this by reducing trainees’ participation in the risky and complex portions of the work. Thus trainees are being kept from situations in which they struggle near the boundaries of their capabilities and recover from mistakes with limited help—a requirement for learning new skills.
2. Experts are being distanced from the work.
Sometimes intelligent machines get between trainees and the job, and other times they’re deployed in a way that prevents experts from doing important hands-on work. In robotic surgery, for instance, surgeons don’t see the patient’s body or the robot for most of the procedure, so they can’t directly assess and manage critical parts of it. For example, in traditional surgery, the surgeon would be acutely aware of how devices and instruments impinged on the patient’s body and would adjust accordingly; but in robotic surgery, if a robot’s arm hits a patient’s head or a scrub is about to swap a robotic instrument, the surgeon won’t know unless someone tells her. This has two learning implications: Surgeons can’t practice the skills needed to make holistic sense of the work on their own, and they must build new skills related to making sense of the work through others.
3. Learners are expected to master both old and new methods.
Robotic surgery comprises a radically new set of techniques and technologies for accomplishing the same ends that traditional surgery seeks to achieve. Promising greater precision and ergonomics, it was simply added to the curriculum, and residents were expected to learn robotic as well as open approaches. But the curriculum didn’t include enough time to learn both thoroughly, which often led to a worst-case outcome: The residents mastered neither.
Dealing with this tension was difficult for everyone, especially because the approaches were in constant flux: New tools, metrics, and expectations arrived almost daily, and instructors had to quickly assess and master them. The only people who handled both old and new methods well were those who were already technically sophisticated and had significant organizational resources.
4. Standard learning methods are presumed to be effective.
Decades of research and tradition hold trainees in medicine to the See one, do one, teach one method, but as we’ve seen, it doesn’t adapt well to robotic surgery. Nonetheless, pressure to rely on approved learning methods is so strong that deviation is rare: Surgical-training research, standard routines, policy, and senior surgeons all continue to emphasize traditional approaches to learning, even though the method clearly needs updating for robotic surgery.
Three organizational strategies that may help leverage shadow learning’s lessons:
1. Keep studying it.
Shadow learning is evolving rapidly as intelligent technologies become more capable. New forms will emerge over time, offering new lessons. A cautious approach is critical. Shadow learners often realize that their practices are deviant and that they could be penalized for pursuing them. (Imagine if a surgical resident made it known that he sought out the least-skilled attendings to work with.) And middle managers often turn a blind eye to these practices because of the results they produce—as long as the shadow learning isn’t openly acknowledged. Thus learners and their managers may be less than forthcoming when an observer, particularly a senior manager, declares that he wants to study how employees are breaking the rules to build skills. A good solution is to bring in a neutral third party who can ensure strict anonymity while comparing practices across diverse cases.
2. Adapt the shadow learning practices you find to design organizations, work, and technology.
Organizations have often handled intelligent machines in ways that make it easier for a single expert to take more control of the work, reducing dependence on trainees’ help. Robotic surgical systems allow senior surgeons to operate with less assistance, so they do. Investment banking systems allow senior partners to exclude junior analysts from complex valuations, so they do. All stakeholders should insist on organizational, technological, and work designs that improve productivity and enhance on-the-job learning. In the LAPD, for example, this would mean moving beyond changing incentives for beat cops to efforts such as redesigning the PredPol user interface, creating new roles to bridge police officers and software engineers, and establishing a cop-curated repository for annotated best practice use cases.
3. Make intelligent machines part of the solution.
AI can be built to coach learners as they struggle, coach experts on their mentorship, and connect those two groups in smart ways. For example, when Juho Kim was a doctoral student at MIT, he built ToolScape and LectureScape, which allow for crowdsourced annotation of instructional videos and provide clarification and opportunities for practice where many prior users have paused to look for them. He called this learnersourcing. On the hardware side, augmented reality systems are beginning to bring expert instruction and annotation right into the flow of work. Existing applications use tablets or smart glasses to overlay instructions on work in real time. More-sophisticated intelligent systems are expected soon. Such systems might, for example, superimpose a recording of the best welder in the factory on an apprentice welder’s visual field to show how the job is done, record the apprentice’s attempt to match it, and connect the apprentice to the welder as needed. The growing community of engineers in these domains has mostly been focused on formal training, but the deeper crisis is in on-the-job learning. We need to redirect our efforts there.
In conclusion, AI is a boon for improving productivity and efficiency while reducing the potential for human error. But there are disadvantages, too, such as development costs and the possibility that automated machines will replace human jobs. It’s worth noting, however, that the artificial intelligence industry also stands to create jobs, some of which have not even been invented yet.