Our Machines, Our Selves

A pair of public lectures kicked off the new Mellichamp Initiative in Mind & Machine Intelligence

We live in a time of convergence of human and machine. Our human experience is augmented by machine applications, from internet-enabled sensors to human-assistive robotics, while we imbue our machines with human qualities, including context-awareness, vision and artificial intelligence (AI).

As technology brings us and our computers into an ever more seamless existence, we seek to understand our place in the complex relationship between human and machine intelligence. Enter UC Santa Barbara’s new Mellichamp Academic Initiative in Mind & Machine Intelligence, a multi-year research effort made possible by a generous gift from Duncan and Suzanne Mellichamp.

The overarching goal of the initiative is to identify the strengths and capabilities of both human and machine intelligence, in order to use the best of one to augment and benefit the other.

“It seems only natural for our society to ask these deep questions about AI and the human mind, and also to think of bold, new questions,” said Miguel Eckstein, a professor in UC Santa Barbara’s Department of Psychological and Brain Sciences. “The fascination with the impact of intelligent machines on human life and society, with understanding the limits of artificial intelligence and with pinpointing what is unique about the human mind has been around for many years and has involved scientists, philosophers, futurists and science fiction writers. With every new leap in the development of AI, we seem to return to these questions.” 

Joining Eckstein at the helm of this endeavor is UC Santa Barbara computer science professor William Wang, an expert in natural language processing (NLP) — a discipline of artificial intelligence that seeks to teach computers to understand and communicate using human language in text and verbal form. Rapid advances in the field have brought us closer to our computers than ever.

On February 19 and 20, the public was invited to explore these big questions in two talks presented as part of a workshop to kick off the new initiative. The hour-long lectures took place at the campus’s Marine Science Institute auditorium.


Can Analogy Unlock AI’s Barrier of Meaning?

Watch the lecture here: https://www.youtube.com/watch?v=QvLEmueHhqY&feature=youtu.be

In a lively discussion that kicked off the workshop, Santa Fe Institute and Portland State University computer science professor Melanie Mitchell outlined what remains artificial intelligence’s greatest obstacle: the so-called “barrier of meaning,” machines’ inability to extract both literal and abstract human-relevant concepts from the information they process. While computers and their brain-inspired neural networks have shown amazing progress in the classification of images, leading to advances in self-driving cars, facial recognition and automatic captioning of photos, they still fall short of the optimistic predictions made in the late 1950s and 1960s, during the birth of artificial intelligence as a discipline.

“We’re still waiting,” said Mitchell, whose research focuses on conceptual abstraction, analogy-making and visual recognition in artificial intelligence systems. The examples of image classification she presented ranged from precise and straightforward to surreal and hilarious. One neural net trained to spot wildlife in photos, for instance, did so not because it could identify animals, but because it had learned, from multitudes of wildlife photos, that the background in such images is usually blurred. In another example, Mitchell revealed that self-driving cars often get rear-ended because they see what they interpret as an obstacle and stop abruptly, causing the human driver behind them, who may see the same thing and not interpret it as an obstacle, to smack into them.

But, said Mitchell, there could be a way to enhance the sophistication of AI learning, and it involves a process humans are born doing.

“Machines are not very good — yet — in what in machine learning people call ‘transfer learning’,” she said, defining it as “taking what you learn in one domain, and transferring it to a similar domain.”
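The idea can be illustrated with a toy sketch in plain Python. This is a hypothetical, much-simplified stand-in for real transfer learning (in practice, a large pretrained model is fine-tuned on a related task): here, a perceptron’s weights are learned on one classification task and then frozen, with only the bias adapted to a shifted version of the same task from just a handful of examples.

```python
import random

random.seed(0)

def train(data, w=None, b=0.0, epochs=50, lr=0.1, freeze_w=False):
    """Simple perceptron training. With freeze_w=True, only the bias
    is adapted -- a toy analogue of fine-tuning a transferred model."""
    if w is None:
        w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # +1, 0 or -1
            if err and not freeze_w:
                w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def accuracy(data, w, b):
    return sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
               for x, y in data) / len(data)

def sample(n, shift):
    # Label: is the point above the line x1 = x0 + shift?
    # Points too close to the line are discarded for a clean margin.
    pts = []
    while len(pts) < n:
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        if abs(x[1] - x[0] - shift) > 0.1:
            pts.append((x, 1 if x[1] - x[0] > shift else 0))
    return pts

source = sample(200, 0.0)   # plenty of data for the source task
target = sample(10, 0.3)    # only a handful for the shifted task

w, b = train(source)                                # learn on the source task
w2, b2 = train(target, w=w, b=b, freeze_w=True)     # transfer: adapt bias only

print(accuracy(source, w, b), accuracy(target, w2, b2))
```

The frozen weights encode what the two tasks share (the orientation of the boundary), so only a single number has to be relearned for the new domain, even from very little data.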

Humans do this type of learning by analogy — using one concept to elucidate a similar concept in another setting. If we could train machines to use analogies, they might be able to crash the barrier of meaning, Mitchell said. In fact, researchers have been trying to get their machines to think conceptually for decades with various exercises, such as the pattern-recognition problems first put forth in the 1970s by Russian scientist Mikhail Bongard. Despite the puzzles’ seeming simplicity — we encounter the same ‘spot-the-difference’ or ‘fill-in-the-blank’ type of challenge in our most basic cognitive tests — computers have not yet been successful in solving all the Bongard problems, Mitchell said.
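In miniature, a Bongard-style problem reduces to finding a rule that holds for every example on one side and fails for every example on the other. The sketch below is a made-up toy (not one of Bongard’s actual visual puzzles, which use drawings rather than lists): each “image” is abstracted to a list of shape sizes, and the hidden rule is that the sizes increase.

```python
# Toy Bongard-style problem: each example is a list of shape sizes.
# The hidden rule separating the two sides: "sizes increase left to right."
left  = [[1, 2, 3], [2, 5, 9], [0, 4, 6]]   # rule holds for all of these
right = [[3, 2, 1], [5, 5, 5], [2, 9, 4]]   # rule fails for all of these

def is_increasing(xs):
    return all(a < b for a, b in zip(xs, xs[1:]))

def solves(rule, left, right):
    # A rule "solves" the problem if it is true of every left example
    # and false of every right example.
    return all(rule(x) for x in left) and not any(rule(x) for x in right)

print(solves(is_increasing, left, right))  # prints True
```

The hard part, of course, is not checking a candidate rule but discovering it — and doing so from raw pixels rather than a convenient symbolic encoding, which is exactly where machines still struggle.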

Still, the next step in the evolution of artificial intelligence will likely lie in machines’ ability to make decisions based on concepts, context, the presence or absence of information, and comparisons between examples — things humans can do automatically and effortlessly.

“I think it’s possible that analogy is really the missing piece,” said Mitchell. Because analogies often encode perspectives, emotions and values relevant to humans, they could “unlock meaning from perception” in artificial intelligence.

Bots and Tots

Watch the lecture here: https://www.youtube.com/watch?v=W8355bkqw9c&feature=youtu.be

Could the machines take our jobs? It’s a question we’ve been asking since the first machines came on the scene in the late 18th century with the onset of the Industrial Revolution.

According to Google chief economist Hal Varian in his Mind & Machine public lecture, the answer is yes — and no. While machines have gradually gotten more sophisticated, jobs, too, have evolved. And while automation has indeed been a huge influence on the labor markets, demographics have also had — and will continue to have — major impacts on the demand for labor. Spoiler alert: We’re probably going to need more automation in the decades to come just to help keep us going.

“As you know, there’s a lot of press out there where people are worried about the impact of automation on the labor market,” Varian said, pointing out various news stories over the last two centuries accusing machines of coming to steal jobs. In recent years, however, the story has been more about growing labor shortages, a trend that he expects will continue for a while.

The reason for this traces back to the mid-20th century: the demographic roller coaster that ran from the early 20th century through the end of World War II, the baby boom that followed in the 1950s and ’60s, the “baby bust” of the 1970s and ’80s, and the “echo of the baby boom” in the years after that, as the children of boomers started to have their own kids. Another demographic shift came in the form of the increasing number of women entering the labor force, creating a huge supply of labor. However, with the retirement of the baby boomer population, U.S. population growth at an all-time low, and the percentage of women in the workforce leveling off, the labor market is now experiencing shortages.

Meanwhile, jobs are continuously changing as automation takes over the more repetitive and low-skilled tasks. Bookkeeping jobs, for instance, have faded as spreadsheet programs took over and automated the simpler math tasks.

“But at the same time those innovations increased the demand for accountants, auditors, management analysts, and so on,” he said. Other jobs, such as video store clerk, grew along with the technology and faded when it became obsolete, but some of their tasks (e.g., video recommendation and distribution) have been absorbed into other jobs. Automation generally eliminates tasks, he argued, but very rarely entire jobs, which tend to be more complex than current automation can handle. In fact, jobs that take place in even a slightly complex environment are likely to be safe from automation — at least in the near term — because optimal automation requires a highly standardized environment that accommodates repetitive, specific actions.

Automation, which leads to ultra-productivity, could change the way we look at work, in terms of hours spent per week, flexible hours, training and education, according to Varian.

Currently, growth in the U.S. labor market is slow, and Varian sees it remaining “anemic” for at least a couple of decades more, as the baby boomer generation heads into retirement.

“But every one of those retirees who withdraws from the labor force will expect to continue consuming,” he said. “And that means that everyone in this smaller labor force will have to be productive in order to provide for the consumption that is demanded by the population as a whole.” This situation is already in play in countries such as Japan, Spain, Korea, Italy and Germany, which are investing heavily in robots for that necessary productivity boost. In the U.S., add to that the cost of already expensive healthcare, which becomes more costly as people live longer.

“We all have some challenges and I would say that if you look at healthcare, that’s one of the areas where people are most motivated to try to come up with cost-saving innovations,” he said.

The public lectures were part of a larger, research-focused, two-day workshop that aimed to bring together some of the greatest minds on the topic of human and machine intelligence in a series of interdisciplinary interactions. Researchers from leading institutions including Harvard, Princeton, UC Berkeley, Carnegie Mellon, Google, University of Chicago Business School and Facebook AI were among the participants, providing insights from a variety of fields, including computer science, engineering, psychology, neuroscience and economics.

“Interdisciplinary meetings are fundamentally different from the more typical meetings,” said Eckstein, whose own research delves into the ways human brains conduct visual searches, recognize faces and direct attention — tasks humans do naturally, but that computers still struggle with. “Everybody needs to work a little harder to understand each other and concepts and approaches get shaken.”

The workshop’s meetings and discussions gave researchers the lay of the land by focusing on the similarities and differences in the capabilities of human and artificial intelligence, and the state of the art on AI that aims to achieve some of the mind’s unique abilities. It also looked at the role of AI in the economic world.

“How much screen time do we spend on our cellphones each day? Do you ask Alexa for the traffic?” asked Wang, who is also the director of the campus’s Center for Responsible Machine Learning (CRML). “The convergence of human and machine intelligence is already happening, and removing the blocks to communication would free up human time and energy, allowing us to think about more important questions.”

The explorations will continue beyond the workshop as well, with research collaborations bringing together faculty members from diverse disciplines, including four endowed chairs. The initiative also will bridge myriad centers and programs on campus, including the Sage Center for the Study of the Mind, CRML, the Cognitive Science Program, the Data Science Initiative and the Center for Information Technology and Society.

“UCSB has built a worldwide reputation by looking at important scientific problems from different perspectives and it was conducting interdisciplinary research before that became a trend,” Eckstein said. “We are approaching this initiative with the same philosophy. Our goal is to create a cluster of professors and researchers who like to get together, look at problems from very different perspectives and think hard about new questions and new approaches to answer those questions.”

The Mellichamp Academic Initiative in Mind & Machine Intelligence is part of a special new cluster that will include four endowed chairs. These will have connections to a number of academic departments — including psychological and brain sciences, economics and computer science — as well as to geography, electrical and computer engineering, linguistics, English, and the neuroscience and data science initiatives.

Updated 2/27/2019
