Collaboration between Human and Artificial Societies
We will increasingly come to understand there are many dimensions of intelligence, and many different kinds of intelligence possible through different combinations of those dimensions. There may, for example, be as many different kinds of intelligence as there are species of living things on our planet. And over the past decade, computers have become much better than people at certain kinds of pattern recognition made possible by machine learning.
That doesn't mean computers are smarter than people in general. It just means that, for this particular kind of thinking, if you want to call it that, computers are way better than people.
But there are plenty of other things that people are better at than computers.

Schwartz: Early in your book Superminds, you discuss the characteristics of intelligent groups. Can you say a bit about this?

Malone: We were essentially trying to develop an IQ test for groups. It turns out to be an empirical fact that people who perform well at a certain task, such as reading, also on average perform well at other things, such as math or three-dimensional figure rotations.
This is the broad general intelligence of individuals that traditional intelligence tests measure. But, as far as we could tell, nobody had tried to create a test of the general intelligence of groups. We wanted to see whether there was a similar kind of general intelligence for groups, and we found that yes, in fact, there is.
It appears that for groups, just as for individuals, there is a single statistical factor that predicts how well a group will do on a wide range of very different tasks. What many people found even more interesting was what we found to be correlated with group intelligence. At first, we worried that the intelligence of the individual group members would be pretty much the only thing that determined how smart the group was. But that turned out not to be the case; instead, several other factors were more strongly correlated with a group's collective intelligence. The first was the degree to which the people in the group had what you might call social intelligence or social perceptiveness.
The second was the evenness of participation: if you have one or two people in a group who dominate the conversation, then, on average, the group is less collectively intelligent than when people participate more evenly. The third was the proportion of women in the group: having more women was correlated with more intelligent groups. Since women, on average, score higher on the measure of social perceptiveness we used, one possible interpretation is that what you need for a group to be collectively intelligent is a number of people in the group who are high on that measure of social intelligence. But this is at least an intriguing set of suggestions about the kinds of things that can help make groups smart.
Guszcza: It often seems organizations reward individual performance but hope for good teamwork. Is there enough of a movement toward actually trying to cultivate practices and standards around forming smart teams in large organizations?

Malone: There is a great deal of work that could be done here. We could certainly do much more evaluation of teams and, perhaps even more importantly, much more systematic analysis of what helps make teams work better.

More broadly, we have spent way too much time thinking about people versus computers, and not nearly enough time thinking about people and computers. Way too much time thinking about what jobs computers are going to take away from people, and not nearly enough time thinking about what people and computers can do together that could never be done before.

Schwartz: How should we be thinking about and exploring the different ways that people and machines will work together in the future?

Malone: Almost everything we humans have ever done has been done not by lone individuals, but by groups of people working together, often across time and space. This includes everything from inventing language to making the turkey sandwiches I usually have for lunch. These groups of humans are examples of what I call superminds. They can be companies, or armies, or families, or many other kinds of things.

Even more importantly, we can also use computers to create what I call hyperconnectivity: connecting people at a scale and in rich new ways that were never possible before. If you think about it, almost everything we use computers for today is really some form of this. Most people use computers primarily for email or looking at the Web or word processing or social media or various things like that, none of which really involves much artificial intelligence or even much computation in the sense of arithmetic or logical reasoning.
These uses of computers today are really almost entirely about connecting people to other people.

Could you give a few examples of that?

Malone: We already know a lot about the different roles people can have relative to each other in groups. So that gives us at least some language for thinking about the roles computers can have as well. The most obvious one, and the one people talk about the most, is computers playing the role of tools.
The next level up is what you might call an assistant.
We certainly use people as assistants for other people, and computers are increasingly taking on that role.

Guszcza: Here at Deloitte, many of us have been doing data science and predictive analytics for about 20 years. One of our applications has been building predictive algorithms to help insurance underwriters better select and price risks, or claims adjusters better handle insurance claims. For the simplest cases, the computer just completes the task. For intermediate cases, the human might need to disambiguate some inputs. The human then spends more time on the complex cases that require context and common sense and judgment. Would this be an example of an assistant?

Malone: The computer can actually do some of the tasks cheaper, faster, and often better than the person, just as an electric saw can cut things faster than a person can. But, unlike the electric saw, the underwriting assistant can also take more initiative when handling straightforward cases.
You could even say that the autocorrect function in text messaging is an example of an assistant that can take a little more initiative—often with amusingly off-the-mark results!
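The tiered workflow Guszcza describes can be sketched in a few lines. This is a hypothetical illustration, not Deloitte's actual system: the confidence thresholds, routing labels, and function name are all invented for the example.

```python
# Hypothetical sketch of the triage pattern described above: the model
# completes routine cases on its own, asks a human to resolve ambiguous
# inputs, and routes genuinely hard cases to human judgment.
# Thresholds and labels are illustrative only.

def triage_claim(model_confidence: float, prediction: str) -> str:
    """Route an insurance case based on the model's confidence (0-1)."""
    if model_confidence >= 0.95:
        # Simple case: the computer just completes the task.
        return f"auto-processed: {prediction}"
    elif model_confidence >= 0.70:
        # Intermediate case: a human disambiguates some inputs,
        # then the model finishes the work.
        return "needs human disambiguation"
    else:
        # Complex case: requires context, common sense, and judgment.
        return "escalated to human adjuster"

print(triage_claim(0.98, "approve"))  # auto-processed: approve
print(triage_claim(0.80, "approve"))  # needs human disambiguation
print(triage_claim(0.40, "deny"))     # escalated to human adjuster
```

The design point is the one Malone makes: unlike a passive tool, the assistant takes initiative on the easy cases and knows when to hand control back.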
The next level up is what you might call a peer. One of my favorite examples is from a research project I did several years ago with Yiftach Nagar. We trained machine-learning algorithms to predict the next plays in American football games, and then let the computers participate in prediction markets along with humans, as peers.

The level above that is machines acting as managers. People can get freaked out about this, but if you think about it, we already have machines as managers in many situations that seem very normal. In the old days, police officers directed traffic at busy intersections.
Today, stoplights do this, and we think nothing of it. It seems completely natural and normal, as I think it should.
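A computer participating in a prediction market alongside humans can be illustrated with a small sketch. The source doesn't describe the agents' actual trading rule, so the logic below is a hypothetical one: the agent buys "yes" shares when its model's probability estimate exceeds the current market price by some margin, sells when it falls below, and otherwise holds.

```python
# Hypothetical trading rule for a software agent in a prediction market.
# Prices and probabilities are both on a 0-1 scale; the threshold keeps
# the agent from churning on tiny disagreements with the market.

def agent_order(model_prob: float, market_price: float,
                threshold: float = 0.05) -> str:
    """Decide a trade from the gap between belief and market price."""
    edge = model_prob - market_price
    if edge > threshold:
        return "buy"   # per the model, the market underprices the event
    if edge < -threshold:
        return "sell"  # per the model, the market overprices the event
    return "hold"      # belief and price roughly agree

# A market trading at 0.55 when the model estimates 0.72:
print(agent_order(0.72, 0.55))  # buy
```

By trading on its disagreements with the crowd, such an agent nudges the market price toward its own estimate, exactly the way a well-informed human participant would.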
The experiments showed that agents were able to learn those competitive and collaborative relationships particularly well.
The dynamics of the agents in the FTW model can be seen clearly by visualizing the activation patterns of their neural networks. In the figure below, clusters of dots represent situations during play, with nearby dots representing similar activation patterns. The dots are colored according to the high-level CTF game state in which the agent finds itself: In which room is the agent? What is the status of the flags? What teammates and opponents can be seen? We observe clusters of the same color, indicating that the agent represents similar high-level game states in a similar manner.
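The kind of plot described above can be approximated in a few lines. The source doesn't say how the two-dimensional layout was computed (nonlinear embeddings such as t-SNE are commonly used for this), so this sketch substitutes a plain PCA projection via numpy, applied to synthetic activation vectors standing in for a real agent's recurrent state.

```python
import numpy as np

# Minimal stand-in for the visualization described above: embed each
# timestep's activation vector in 2-D and tag it with a discrete game
# state.  The activations here are synthetic; a real analysis would use
# the agent's recorded hidden states and a nonlinear embedding.

rng = np.random.default_rng(0)

# 300 timesteps x 128 hidden units, drawn from 3 synthetic "game states"
# (e.g. home base / midfield / enemy base) with different activation
# means so the states form visible clusters.
states = rng.integers(0, 3, size=300)
centers = rng.normal(size=(3, 128)) * 3.0
acts = centers[states] + rng.normal(size=(300, 128))

# PCA: center the data, then project onto the top-2 right singular vectors.
centered = acts - acts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # (300, 2) plot coordinates, colored by `states`

# Points sharing a game state should sit near each other: compare the
# mean within-state distance to the overall mean pairwise distance.
def mean_dist(xy):
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    return d.sum() / (len(xy) * (len(xy) - 1))

within = np.mean([mean_dist(coords[states == s]) for s in range(3)])
overall = mean_dist(coords)
print(within < overall)  # True: same-state points cluster together
```

Plotting `coords` with one color per value of `states` reproduces the qualitative picture in the text: same-colored dots fall into tight clusters, meaning similar game situations map to similar activation patterns.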
A key thing to highlight is that the FTW agents are never explicitly trained on collaborative or competitive dynamics; their behavior emerges purely from constantly playing the game with other agents and human players. To evaluate the collaborative and competitive behaviors of the FTW agents, the DeepMind team ran a large tournament on procedurally generated maps, with ad hoc matches involving three types of agents as teammates and opponents: ablated versions of FTW (including state-of-the-art baselines), Quake III Arena scripted bots of various levels, and human participants with first-person video game experience. In this tournament, the FTW agents exceeded the win rate of the human players.
To probe this surprising result, the DeepMind team created a further challenge: a team of two professional games testers, with full communication, played continuously against a fixed pair of FTW agents. One potential explanation for the superior performance of the FTW agents is their faster visual perception and motor control compared to human players. Even after accounting for this, the FTW agents outperformed the strongest human players most of the time. Even more remarkable was observing how those behaviors evolved over time.