5 Takeaways: Artificial Intelligence

An amuse-bouche for the (non-machine) mind.

In Scientific Controversies, hosted by our own Janna Levin, we tackle complex, conceptual, occasionally amorphous topics like animal consciousness and string theory. These rich, hour-long conversations take place in person at Pioneer Works, and feature big thinkers exchanging ideas on big questions. To watch is a feast. “5 Takeaways” is a snack, an amuse-bouche for the mind. Because comprehension sometimes demands—or, at the very least, appreciates—distillation, and the internet loves a listicle. Below, for the benefit of lay science enthusiasts, contributor Anil Ananthaswamy serves up key takeaways from a conversation on artificial intelligence between guests Yann LeCun and Max Tegmark.

What is AI?

Artificial Intelligence (AI) refers to software or hardware that attempts to mimic human intelligence in ways small and large. The current approach to building AI involves machine learning, in which computer systems learn patterns in data and then use that knowledge to make predictions about new data and take actions. An email spam filter, for example, uses machine learning: it examines a large sample of emails that have already been annotated as spam or not-spam and figures out the patterns that typify spam. It then applies what it has learned to filter spam from future emails, while continually learning to keep abreast of spammers’ new tactics. Such machine learning systems have been extremely successful at tasks like face and speech recognition and language translation.
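For the curious, here is a minimal sketch of that learn-then-predict idea in Python, using the widely available scikit-learn library. The handful of example emails and their labels are invented for illustration; a real filter would train on millions of messages and far richer features.

```python
# Toy spam filter: learn word patterns from hand-labeled emails,
# then predict labels for a new, unseen message.
# (Illustrative sketch only; the training data below is made up.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical emails that have already been annotated by hand.
emails = [
    "Win a free prize now, click here",
    "Limited offer: cheap meds, act fast",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review my draft before Friday?",
]
labels = ["spam", "spam", "not-spam", "not-spam"]

# Count word occurrences, then learn which words typify spam.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Apply that knowledge to a new email the filter has never seen.
print(model.predict(["Click now to claim your free offer"]))
# Likely output: ['spam']
```

The same fit-then-predict pattern underlies the face recognition, speech recognition, and translation systems mentioned above, just with vastly more data and more elaborate models.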

Does AI pose an existential threat?

Current AI systems are good at specific tasks: image recognition, for example. Such AI is called weak or narrow AI. Efforts are ongoing to develop artificial general intelligence, or AGI: AI that would ostensibly match or even exceed human intelligence, in that it would be capable of thinking about and tackling a wide variety of problems, even those it wasn’t explicitly trained to solve. Some worry that AGI will eventually become supremely intelligent and autonomous and pose a threat to human existence. But whether this will happen, let alone exactly when, is a topic of intense debate. Scientists also differ on when we should start planning for such an eventuality. Some advocate that now is the right time, while others argue that we should wait until we know what such an AGI will look like.

Are minds machines?

Maybe. Certain materialist philosophers assert that humans are material beings and that everything about us—including our consciousness—can eventually be understood as the outcome of the interactions and organization of the matter that makes up our brains and bodies. In this way of thinking, everything—from the tiniest to the most complex of lifeforms—is made up of a hierarchy of interacting systems built from the bottom up; and the mind—both its conscious and unconscious aspects—is the outcome of such machinery. In other words, we may be biological machines.

Could machines become sentient, and how could we tell?

If the materialist philosophers are correct—and that’s a huge if—then our subjective feeling that consciousness is something non-material is simply an illusion. If so, it’s possible that we will one day build machines that have similar subjective experiences of consciousness. To tell whether or not a machine is conscious, we’d have to design a suitably aggressive Turing Test, one that probes a machine’s claims of internal conscious experience to gauge whether they are genuine. Alternatively, if neuroscience deciphers the brain processes that give rise to the sense of subjective consciousness in humans, we could simply look for similar processes at work in machines.

Is intelligence limited to carbon-based life?

Not necessarily. Carbon, it turns out, is an excellent element for engendering life. Each carbon atom can chemically bond with up to four other atoms, enabling the formation of biologically important long-chain molecules such as proteins and DNA. Carbon is the substrate of life on Earth. But there is no reason to think that machinery built out of other elements, such as silicon, cannot be imbued with intelligence and consciousness; there is no scientific basis for carbon chauvinism. Artificial general intelligence built using software and hardware could one day demonstrate that properties considered the purview of carbon-based lifeforms may indeed be substrate-independent. ♩

