We are on the verge of a technological revolution that will fundamentally alter the way we live, work, and relate to one another, in ways unlike anything humankind has experienced before. The main driver of this technological revolution is Artificial Intelligence (AI).
Technological change driven by AI will change not only what we do but also who we are. It will affect our identity and all the issues associated with it: our sense of privacy, our notions of ownership, our consumption patterns, the time we devote to work and leisure, and how we develop our careers, cultivate our skills, and nurture relationships. At the same time, the development and application of artificial intelligence can present a dystopian threat to our collective and individual well-being.
What is Artificial Intelligence?
From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous robots and weapons systems.
Artificial intelligence today is often referred to as narrow AI (or weak AI), which is designed to perform a narrow task (e.g., facial recognition, internet searches, or driving a car). The other kind of Artificial Intelligence is termed general AI (AGI, or strong AI), which is designed to “think” and solve problems much like humans do. While narrow AI may outperform humans at a specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
Artificial intelligence involves the attempt to make machines think in the way humans do. The famous Turing Test is a test for intelligence in a computer, requiring that a human being should be unable to distinguish the machine from another human being by using the replies to questions put to both. Arthur Samuel, a pioneer in the field of Artificial Intelligence, defined machine learning as “the ability to learn without being explicitly programmed.” Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a conclusion or prediction.
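Samuel's definition can be made concrete with a toy sketch. The nearest-neighbour classifier below (plain Python; the data points are invented for illustration) contains no hand-written classification rules: it labels new points purely from the examples it has seen, which is machine learning at its most basic.

```python
import math

# Toy training set (invented for illustration): (feature_1, feature_2) -> label.
# No classification rule is written by hand; the labeled examples below are
# the only "knowledge" the program has.
training_data = [
    ((1.0, 1.2), "cat"),
    ((0.8, 0.9), "cat"),
    ((4.1, 3.9), "dog"),
    ((4.5, 4.2), "dog"),
]

def predict(point):
    """Label a new point by copying the label of its nearest neighbour."""
    _, label = min(training_data, key=lambda pair: math.dist(pair[0], point))
    return label

print(predict((1.1, 1.0)))  # lands near the "cat" cluster
print(predict((4.0, 4.0)))  # lands near the "dog" cluster
```

Real machine-learning systems use far richer models and vastly more data, but the principle is the same: behavior is derived from examples rather than explicitly programmed.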
Robots are autonomous or semi-autonomous machines, often applications of Artificial Intelligence, that can act independently of external commands. Robots can use artificial intelligence to improve their autonomous functions by learning, although it is also common for robots to be designed with no capability to self-improve.
There are at least 33 types of Artificial Intelligence, examples of which you can read about at this link.
Artificial Intelligence and the Internet of Things (IoT)
Think of all the “smart” devices that exist in our world from phones to appliances and even entire buildings. These devices are all connected through the “cloud” to the Internet, with the capability of communicating with each other.
An estimated 25 billion connected “things” will be in use by 2020. Of approximately 1,000 global business executives surveyed, 65% agree that organizations that leverage the Internet of Things will have a significant advantage. The IoT market is predicted to grow to $1.7 trillion by 2020, representing a compound annual growth rate of 16.9%.
Technology author Anthony D. Williams argues, “Virtually every animate and inanimate object on Earth could be generating and transmitting data, including our homes, our cars, our natural and man-made environments, and yes, even our bodies.”
The Dark Side of Artificial Intelligence
The question of whether AI could become malevolent or destructive has been raised. Experts consider two scenarios most likely:
The AI device or program does something destructive: For example, autonomous weapons that are programmed to kill.
The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: For example, an AI system tasked with an ambitious geoengineering project might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to be countered.
In a paper published in the journal Science Robotics, researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi point out that policing robotics is extremely difficult. And as artificial intelligence becomes more widespread, it’s going to become a greater problem for society.
In 2015, Elon Musk donated $10 million, as reported in Wired magazine, “to keep A.I. from turning evil.” Musk, Bill Gates, and Stephen Hawking have all issued warnings about the dark side of Artificial Intelligence if we fail to control its development.
Artificial Intelligence Research and Applications
New “deep learning” artificial intelligence (AI) algorithms are showing promise in performing medical work that until recently was thought to be possible only for human physicians. For example, deep learning algorithms have been able to diagnose the presence or absence of tuberculosis (TB) in chest x-ray images with astonishing accuracy.
Researchers at Google were able to train an AI to detect the spread of breast cancer into lymph node tissue on microscopic specimen images with accuracy comparable to (or greater than) that of human pathologists. Similarly, neural networks have been shown to be (slightly) better than human physicians at detecting changes of diabetes in images of patients’ retinas. In other words, these early investigations into deep learning medical AI demonstrate that the algorithms can do as well as (if not better than) expert human physicians in some fields of medical diagnosis and prognosis.
Here’s a sample of the kinds of AI research and applications that either currently exist or are in development:
Researchers at Vanderbilt, Virginia Tech and Yale universities have discovered that brain scans can reveal a criminal suspect’s ‘state of knowledge’ (shades of the movie Minority Report);
Despite the common preconception that creating emotionally intelligent computers is something that won’t happen until far into the future, computers can already augment — and in some cases even replace — emotional intelligence. Sony has announced plans to create customer service robots that will develop emotional bonds with customers, and apps like Cogito use AI to guide human agents toward using more emotional intelligence as they work with customers;
AI versions of therapists have accurately predicted suicidal patients, depressive behavior, and criminality;
Scientists at the University of Oxford have developed software that can read lips correctly 93.4 per cent of the time – a level that far surpasses the best professionals;
In a significant step forward for artificial intelligence, a hybrid system built by Alphabet’s DeepMind — called a Differentiable Neural Computer (DNC) — is now capable of teaching itself based on information it already possesses;
The Central Intelligence Agency (CIA) has upgraded its approach to surveillance with a new “technology-first” strategy that uses deep learning and neural networks to scan big data in order to predict when and where trouble is likely to occur in the US;
DeepMind’s AlphaGo Artificial Intelligence has won the final match of the Go series against world champion Lee Sedol. The 3,000-year-old Chinese board game has proved notoriously hard to master for AI developers due to the sheer number of possible moves;
Australian scientists have built an artificial intelligence system that can predict whether or not you will die soon by looking at images of your organs with about 69% accuracy.
For an expanded description of 59 things Artificial Intelligence can do, go here.
The word “Robotics” was first used by Isaac Asimov, an acclaimed science fiction writer. Asimov also devised the “Three Laws of Robotics” that define how robots should interact with humans.
We can define a robot as “any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner.” A robot designed specifically to look and act like a human, particularly one with an external skin-like surface and facial expressions, is called an android.
China is already the world’s largest producer of industrial robots, supplying about 27% of the global market since 2015. It’s also the largest buyer of robots.
Here are some examples of the research and use of robots currently:
Researchers are working to build humanoid robots that can sense the world and navigate it with human-level ability. The robots will use a combination of tactile sensors, gyroscopes, cameras, and microphones to enhance their sensing abilities and use that data to understand the world;
Scientists have created fleshy “bio-bots” made of living cells which can wriggle and walk;
FEDOR — short for Final Experimental Demonstration Object Research — is a humanoid robot developed by Android Technics and the Advanced Research Fund. The multi-talented bot can drive a car, use various tools (including keys), screw in light bulbs, and even do pushups. It has also proven capable of working in extreme conditions. Now, FEDOR has added shooting handguns to its skill set;
China’s new robot police officers have started patrolling streets. The E-Patrol Robot Sheriff is able to track and follow potential criminals or suspicious people via facial recognition, according to the Economic Daily. Besides fighting crime, the robot officer is also capable of monitoring air quality and temperature, and is supposed to be able to track potential criminal activity, safety hazards and potential fires. Dubai’s government is introducing a “new fleet of intelligent police androids” that will be patrolling streets, malls and other crowded public spaces in 2017;
The U.S. Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans;
Sometimes referred to as “cloud robotics,” networks of robots are already teaching one another about what they learn as they interact with the world. This co-evolution could occur rapidly and enable robots to quickly become even more physically and mentally capable of engaging with the world than any single human being;
The company Soul Machines has created a virtual chatbot called Nadia that can not only portray human emotion but also read human facial expressions;
A new Tokyo hotel staffed mostly by robots and automatons has recently opened. Nine types of robots help with check-ins, clean the lobby, and entertain guests;
Researchers at the University of Utah have developed a surgery-assisting robot capable of performing complex brain surgeries. The machine can shorten surgeries by cutting the time it takes to open the skull from two hours to about two and a half minutes;
The company PassivDom uses a 3D printing robot that can print the walls, roof, and floor of a 380-square-foot model home in about eight hours. When complete, the homes are autonomous and mobile, meaning they don’t need to connect to external electrical and plumbing systems;
Harmony is a sexbot – a silicone sex robot with artificial intelligence (AI) who looks human, feels human and responds in an eerily human way.
Artificial intelligence, Virtual Reality (VR) and Augmented Reality (AR)
Virtual reality (VR), also referred to as immersive multimedia or computer-simulated reality, replicates an environment that simulates a physical presence in places in the real world or an imagined world, allowing the user to interact with that world. Virtual reality is the umbrella term for all immersive experiences, which could be created using purely real-world content, purely synthetic content, or a hybrid of both. CG VR is an immersive experience created entirely from computer-generated content. CG VR can be either pre-rendered and therefore not reactive (in this way it is very similar to 360° video) or rendered in real time using a game engine. Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics, or GPS data.
VR headsets—such as Sony’s Morpheus or Facebook’s Oculus Rift—block out the surrounding world and, making use of an old trick called stereoscopy, show a slightly different image to each of the user’s eyes. That fools the brain into creating an illusion of depth, transforming the pair of images into a single experience of a fully three-dimensional world. Motion trackers, mounted either on the headset or externally, keep track of the user’s head, updating the view as it moves; optional hand controllers allow the user to interact with virtual objects. The result is a reasonably convincing illusion of being somewhere else entirely.
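The depth trick described above is ultimately geometry: the angular difference between what the two eyes see, which a headset must reproduce, shrinks rapidly with distance. A rough sketch, using an illustrative average interpupillary distance rather than any particular headset’s specifications:

```python
import math

IPD = 0.063  # interpupillary distance in metres (illustrative average, ~63 mm)

def disparity_degrees(depth_m):
    """Angle between the two eyes' lines of sight to a point straight
    ahead at depth_m. A headset recreates depth by offsetting each
    eye's image to match this angle."""
    return math.degrees(2 * math.atan((IPD / 2) / depth_m))

for depth in (0.5, 2.0, 10.0):
    print(f"{depth:>5.1f} m -> {disparity_degrees(depth):.2f} degrees")
```

The angle falls off steeply with distance, which is why stereoscopic depth is most convincing for nearby virtual objects and why other cues (perspective, motion parallax) matter more for distant scenery.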
Augmented reality, by contrast, does not dispense with the real world, but uses computers to improve it in various ways. AR, by design, maintains its users’ connection with the real world, which means a headset is not necessary. Heads-up displays are an early example of AR, but there are others: VeinViewer, for instance, is a medical device that projects images of a patient’s veins onto their skin, to help doctors aim injections.
Google has developed Expeditions, a virtual reality platform built for classrooms. Students can use Cardboard to take guided tours of famous cities like Barcelona, Spain, or inaccessible places like space.
The Virtual Reality Medical Center (VRMC) specializes in VR exposure therapy, particularly in the treatment of fear of flying. The center utilizes a real, repurposed commercial airliner seat. Patients undergoing psychotherapy are belted into the seat and equipped with a head-mounted display, at which point they are taken through the entire experience of a flight, from take-off to safe landing.
Artificial Intelligence and Cybersecurity
Cyberspace is an increasingly hostile environment. In 2015, a PwC study of U.S. organizations found that 79 percent of respondents had detected a security incident during the year. Cyber-attackers are leveraging automation technology to launch strikes, while many organizations are increasingly using AI to aggregate internal security findings and contextualize them with external threat information. Unveiled to the world in April 2016, AI-Squared is a collaborative project between MIT’s Computer Science and AI Laboratory (CSAIL) and a machine-learning startup known as PatternEx. Its function: to identify cyber-attacks.
Artificial Intelligence and the New Workplace
AI technology is, and will continue to be, a major disruptor of workplaces and jobs. In January 2016, the World Economic Forum released a report predicting that AI, machine learning, and other nascent technologies will spur a so-called “Fourth Industrial Revolution” that eliminates a net 5.1 million jobs by 2020. According to the report, jobs across every industry and every geographical region in 15 of the world’s largest economies — Australia, Brazil, Germany, China, Japan, the UK, and the US, among others — will be affected. And six jobs are eliminated for every robot introduced into the workforce, a new study says.
In a 2013 paper titled “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford University researchers C.B. Frey and M.A. Osborne created a model that calculates the probability of a worker in a given sector being substituted by a machine. Frey and Osborne conclude that machines may replace 47% of active workers in the future. Of 1,896 prominent scientists, analysts, and engineers questioned in a recent Pew survey on the future of jobs, 48% said the AI revolution will be a permanent job killer on a vast scale. The Bank of England has warned that within the coming decades as many as 80 million jobs in the U.S. could be replaced by robots.
A team of researchers led by Katja Grace of Oxford University’s Future of Humanity Institute surveyed several hundred machine-learning experts to get their educated guess. The researchers used the responses to calculate the median number of years it would take for AI to reach key milestones in human capabilities. Overall, the respondents believe there is a 50% chance that AI will outperform humans at all tasks within 45 years, and that it will automate all human jobs within 120 years.
One of the surprises of AI in the last 50 years is that people thought we would start by automating the trivial things, like construction work or cleaning toilets, and that the hardest things to automate would be what doctors and lawyers do. It has turned out to be exactly the opposite: doctors and lawyers are much easier to automate than street sweepers.
Obviously, the major question we must answer is this: what will people do if large numbers of jobs are taken over by Artificial Intelligence programs or robots? Millions of white-collar workers could now be at risk, according to politicians and business leaders meeting at the World Economic Forum. In his book Rise of the Robots, Martin Ford describes the social and economic disruption that is likely to result when educated workers can no longer find employment.
Futurist Jeremy Rifkin contends we are entering an entirely new phase in history, one characterized by a steady and inevitable decline of jobs. He says the world of work is being polarized into two forces: an information elite that controls the global economy, and a growing number of displaced workers.
AI and Management in Organizations
In the years ahead, everyone from doctors, lawyers, and scientists to journalists will find themselves working with, and possibly being replaced by, Artificial Intelligence machines. Computers are becoming increasingly capable of making decisions, taking complex actions, and performing “knowledge work.”
An Economist special report, “The Future of Jobs,” described how entire professions will be affected by automation and AI. Accounting and auditing are examples of business functions that can increasingly be done by expert AI systems, putting these professions at risk, at least in their current form. Middle-management decision-making processes based on financials are similarly capable of being driven by AI algorithms.
At a Rotterdam School of Management (RSM) Leadership Summit on Big Data, an expert panel briefly discussed the implications of AI advances for management. As an example, airline autopilots were raised as a domain where computer decision making surpasses human decision making. Similarly, with rapid advances in computer-driven automobiles, such as the Google car, we are now within generational sight of the obsolescence of human drivers.
Managers might like to believe that they have better hiring judgment than a computer, but a working paper (paywall) from the National Bureau of Economic Research suggests otherwise. The researchers looked at the employment records of 300,000 low-skill service-sector workers across 15 companies. The jobs had low retention rates, with the average worker lasting just 99 days, but the researchers found that employees stayed in the job 15% longer when an algorithm was used to judge their employability.
Which white-collar professions may be immune to AI and automation? In essence, professions that help people find and pursue meaning and fulfillment will be increasingly necessary. For example, ‘divinity consultants’ may help people connect to a religious tradition in which they can develop a personal stake. And imagine ‘leisure time advisors’ and ‘experience orchestrators’ – a hypothetical mixture of tourism specialist, hobby advisor, and therapist. Leisure time is increasingly a precious resource for which technology will compete. Those who can manage the connection of personal desires and happiness to new technical possibilities will be in demand. But traditional jobs that are routinized and susceptible to algorithms can be replaced by AI and robots.
A study from the Human-Computer Interaction Lab at the University of Manitoba suggests that you’ll probably obey a robot boss nearly as predictably as you would a human one. The researchers found humans willing to take orders from computers, though somewhat less readily than from other humans.
McKinsey’s Rik Kirkland, Erik Brynjolfsson, and Andrew McAfee argue that senior managers are far from obsolete. As machine learning progresses at a rapid pace, top executives will be called on to create the innovative new organizational forms needed to crowdsource the far-flung human talent that’s coming online around the globe. Those executives will have to emphasize their creative abilities, their leadership skills, and their strategic thinking to a much greater degree.
AI sophistication will expand into many HR functions. For example, Jobaline, a job-placement site, uses intelligent voice analysis algorithms to evaluate job applicants. The algorithm assesses paralinguistic elements of speech, such as tone and inflection, predicts which emotions a specific voice will elicit, and identifies the type of work at which an applicant will likely excel.
Advances in technology are causing firms to restructure their organizational makeup, transform their HR departments, develop new training models, and reevaluate their hiring practices. This is according to Deloitte’s 2017 Human Capital Trends Report, which draws on surveys of over 10,000 HR and business leaders in 140 countries. Many of these changes are a result of the early penetration of basic AI software, as well as preparation for the organizational needs that will emerge as these technologies mature.
A survey done by the World Economic Forum’s Global Agenda Council on the Future of Software and Society shows people expect artificial intelligence machines to be part of a company’s board of directors by 2026.
A report by MIT published in Sloan Review makes this provocative statement: “An inevitable shift in which a parent-to-child way of looking at the relationship between the manager and his or her team would be questioned and ultimately superseded by an adult-to-adult form. The nexus of this more adult relationship concerns how commitments are made and how information is shared. When technology enables many people to have more information about themselves and others, it’s easier to take a clear and more mature view of the workplace. Self-assessment tools, particularly those that enable people to diagnose what they do and how they do it, can help employees pinpoint their own productivity issues. They have less need for the watchful eyes of a manager.” One could easily imagine that “the end of management” is in sight — crushed by peer feedback, pushed out by specialist roles, disintermediated by powerful platforms, and exposed by social network analysis.
The Universal Basic Income
One of the possible solutions to the massive unemployment that could result from the implementation of Artificial Intelligence in the workplace is the institution of a “Universal Basic Income,” in which all citizens or residents of a country regularly receive an unconditional sum of money from the government or some other public institution, in addition to any income received from elsewhere. It would replace the current system of social welfare payments.
Finland, France, and Canada have already approved pilot tests of government-provided universal basic income, something Elon Musk has said will be an inevitable necessity as A.I. spreads. Basic income has been tested for decades. Finland voted to give the system a try starting in 2017. In the Netherlands, interest has been spreading since the Dutch city of Utrecht launched an experimental program. The US even tested a system in the 1960s under the Nixon administration, although the experiment eventually fizzled, and just recently the state of Hawaii passed legislation to study a Universal Basic Income for state residents. For four years, in the small Canadian town of Dauphin, residents making less than $13,800 annually were given $4,800 per year to supplement their income. During this time, the population saw a decline in the number of mental health-related visits to the doctor and fewer hospital admissions due to “accident and injury,” as well as fewer mental health diagnoses in general. These findings were corroborated by a similar program implemented nearly two decades later on Cherokee land in the United States.
French policy analysts Nicolas Colin and Bruno Palier recommend that other countries adopt the Nordic model of “flexicurity,” in which benefits are decoupled from jobs. By guaranteeing access to health care, housing, and training, “people won’t be so terrified of switching jobs or losing a job,” they write in a Foreign Affairs piece.
Tesla CEO Elon Musk, Y Combinator President Sam Altman, and Facebook Cofounder Chris Hughes have all endorsed basic income. (Altman and Y Combinator are leading a basic-income trial in Oakland, California).
Artificial Intelligence and Education
To a significant degree, the Artificial Intelligence revolution will make obsolete, or at least require us to rethink, the current system of education and workplace training and development.
“In the next century, schools as we know them will no longer exist,” says a feature in The Age, a publication based in Melbourne, Australia. “In their place will be community-style centers operating seven days a week, 24 hours a day.” Computers will become an essential ingredient in the recipe for an effective school of the future. Students, The Age asserts, will see and hear teachers on computers, with “remote learning” the trend of tomorrow. Accessing “classrooms” on their home computers, students will learn at times most convenient for them. Yet some attendance at an actual school will be required to help students develop appropriate social skills.
In the 2011 book The Innovative University, Clayton Christensen, a professor of business administration at Harvard, argues that universities could be overtaken by competitors if they fail to adopt new technologies. Children need to learn social and emotional skills if they are to thrive in the workplace of the future, a World Economic Forum report has found.
The new research shows that as the digital economy transforms the workplace, Social and Emotional Learning (SEL) skills such as collaboration, communication, and problem solving will become ever more important as more traditional roles are mechanized. With more than half of children now entering school expected to work in jobs that don’t yet exist, adaptability is becoming a core skill.
A 48-page report titled “Preparing for the Future of Artificial Intelligence” concludes that it is time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and to begin thinking of it as a platform for lifelong learning. Colleges and universities need to do more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.
Here are some ways in which Artificial Intelligence will have a huge impact on both the structure and delivery of higher education:
AI can create unique learning pathways for individual learners in MOOCs and blended and online learning;
AI could allow researchers to bring together vast amounts of data for the benefit of learners and advancement of knowledge;
AI could provide the opportunity for global classrooms and connect learners globally;
Intelligent Tutor Systems can provide timely guidance, feedback, and explanations to the learner and can promote productive learning behaviors, such as self-regulation, self-monitoring, and self-explanation. They can also prescribe learning activities at the level of difficulty and with the content most appropriate for the learner;
AI can help organize and synthesize content to support content delivery. Using so-called deep learning systems, technology can read, write, and emulate human behavior. For example, Dr. Scott R. Parfitt’s Content Technologies, Inc. (CTI) enables educators to assemble custom textbooks: educators import a syllabus, and CTI’s engine populates a textbook with the core content;
Leading-edge technologies like wearable devices, apps, and virtual reality can also improve SEL skills. Wearables are already being used to help students manage their emotions and build communication skills, while virtual reality can be used to take children on virtual field trips that build curiosity and improve critical thinking;
In recent years, thanks to online services, students have been able to get help from peers thousands of miles away. Now with the help of AI and Machine Learning, finding remote help is becoming even easier. Brainly, a social network that helps millions of students collaborate, is exploring the power of AI on its platform.
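The way an Intelligent Tutor System can “prescribe activities at the right level of difficulty,” as described in the list above, can be illustrated with a deliberately simple staircase rule: step the difficulty up after a correct answer and down after a mistake. This is a hypothetical sketch, not the algorithm of any particular tutoring product; real systems use much richer learner models.

```python
class ToyTutor:
    """Hypothetical staircase tutor: raise difficulty on success,
    lower it on failure, clamped to a fixed range of levels."""

    def __init__(self, levels=5):
        self.level = 1          # start with the easiest material
        self.max_level = levels

    def next_level(self, answered_correctly):
        if answered_correctly:
            self.level = min(self.level + 1, self.max_level)
        else:
            self.level = max(self.level - 1, 1)
        return self.level

tutor = ToyTutor()
history = [tutor.next_level(ok) for ok in (True, True, False, True, True, True)]
print(history)  # difficulty rises, dips after the mistake, then recovers
```

Even this crude rule keeps each learner near the edge of their ability; production tutoring systems replace the staircase with statistical models of what the learner has mastered.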
Artificial Intelligence and Training and Development
In the future of work, the most important skill is learning how to learn. The amount of knowledge available and the skills needed to succeed in the workplace are constantly changing, and the best employees know how to find the information they need and continually hone their skills to stay at the top of their game.
The corporate training market, which is over $130 billion in size, is about to be disrupted. Companies are starting to move away from their Learning Management Systems (LMS), buy all sorts of new tools for digital learning, and rebuild a whole new infrastructure to help employees learn. Programs such as G Suite, Microsoft Teams, Slack, and Workplace by Facebook are growing quickly. Axonify and Qstream can “space learning” based on your job and prescribe small nuggets just as needed. This is pushing vendors like Workday, Oracle, SuccessFactors, SumTotal, and others to reinvent the LMS, focusing on video-learning platforms that feel more like YouTube than an educational course catalog.
Deloitte Human Capital Trends’ newest research shows that “reinventing careers and learning” is now the #2 issue in business (second only to reorganizing the company for digital business), creating urgency and budget in this area.
Walmart is betting big on virtual reality to help improve its employee training techniques, and it’s turned to a new company to help. TechCrunch is reporting that Walmart plans to install VR training platforms at each of its 200 Academy training centers across the U.S. by the end of the year. Each will have an Oculus Rift and a VR-ready PC to run it on.
Within the contemporary organization, staff-coaching processes continue to evolve, as organizations migrate toward newer technologies and software systems that can better support a more dynamic mode of staff training and development. Ari Kopoulos, writing for EmployeeConnect.com, says that “AI programs offer HR departments ways to train their staff, earn certifications, cross-train and learn new skills.” What is distinctive about AI-enriched software programs is that they allow staff to engage in self-directed progress with their training, at their own comfortable pace.
ValeurHR.com points out that AI-enriched learning systems are now beginning to offer “customizable employee-related training based on individual performance”. The impacts of advancements like this will be numerous: can you imagine the gratification to be gained from knowing that each employee in your organization has access to their own ‘personal mentor’?
John Seely Brown, former Chief Scientist at Xerox and Director of its Palo Alto Research Center, argues, “We must re-invent the workplace as a ‘learningscape.’” He goes on to say that we should build urban learning initiatives such as “Cities of Learning”—a new movement in which employers, libraries, and museums are wired together to help kids find their interests outside school and pick up new skills—or networks of partners in the corporate world. A powerful example of this kind of learning is the use of GitHub and other open-source communities. Another example: SAP, a rather conservative company, created an extended open-source network with a couple of million participants who learn with and from each other.
Paul Rosenbloom, professor of computer science at the University of Southern California is beginning to apply his AI platform, Sigma, to the ICT’s Virtual Humans program, which creates interactive, AI-driven 3D avatars. A virtual tutor with emotion, for instance, could show genuine enthusiasm when a student does well and unhappiness if a student is slacking off. “If you have a virtual human that doesn’t exhibit emotions, it’s creepy. It’s called uncanny valley, and it won’t have the impact it’s supposed to have,” Rosenbloom says.
Both Virtual Reality (VR) and Augmented Reality (AR) have an important place in the Artificial Intelligence revolution as it applies to education, training, and development. The practical applicability of virtual reality and augmented reality in eLearning is a hotly discussed topic right now. The 2016 Horizon Report, produced by one of the most respected analytical groups in the field, dedicates a number of pages to the question of using augmented and virtual reality in education. For now, potential applications in the fields of physics and medicine show the most promise. So what good can these newfangled technologies do? First of all, virtual reality can transport students to the farthest corners of the observable universe in the blink of an eye and immerse them in a deep and engaging educational environment. Great motivational potential is another major benefit. Which is cooler: to read pages upon pages of text accompanied by black-and-white illustrations, or to find yourself on Mars and gather soil samples by hand? That was, of course, a rhetorical question.
In Summary: As you can see from these brief descriptions of developments in Artificial Intelligence, they will have an enormous impact on our personal and work lives. In the process, there will be much disruption, and it is unlikely we will be able to stop this Fourth Industrial Revolution. But it does provide us with the opportunity to address the ethical, moral, legal, and social issues it raises, including defining a proactive role for government in ensuring these developments are for the benefit of people.
Copyright, 2017 by Ray Williams.