Practical Peace Machine, Part 2: What could we build right now?

In part 1 of this blog we discussed AI’s terminology and limitations. Now we can dive straight into the Peace Machine.

Timo Honkela describes three concepts in his book, the Peace Machine. Each of them is an idea about how artificial intelligence (AI) could help human communication.

  1. The meaning negotiation machine recognizes that people use different words for the same concepts and helps them get on the same page.
  2. The feeling machine identifies feelings and helps people understand others as well as themselves better.
  3. The million people meeting splits mass events into small groups that hold their discussions simultaneously, then collects the results using data mining.

There are two things that Timo Honkela is very clear about in his book. Firstly, the Peace Machine is not a black box put on the table at crisis resolution negotiations. And secondly, none of the three concepts can be implemented with today’s technology. With that in mind, how far could we get with today’s AI technology?

Meaning negotiation

The underlying technology of meaning negotiation is tensor analysis. Tensors are multidimensional arrays that in our use case would hold information about concepts and their connections.

To use meaning negotiation in a generic discussion we would need to have information about the individuals, their knowledge, their languages and maybe even their values. And then we should be able to connect that information to the subject of the discussion to understand the possible differences in meaning.

The amount of data and computing power required is simply beyond today’s reach. But could we build a subset of meaning negotiation? As mentioned in part 1, AI works best in scenarios that are well defined. So, we could change the setting from plugging into a live conversation to analysing content that we already have.

Honkela describes in his book research on how US Democrats and Republicans talk about healthcare. From a large number of speeches it can be seen that the two parties connect very different ideas and terms to the same subject.

While this kind of analysis is pretty far from translating meanings from one individual to another, it can be useful in many practical solutions. Especially in politics it’s important to look behind the rhetoric to understand what the true differences between political parties really are.
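To get a feel for this kind of analysis, here is a toy sketch in Python. The speech fragments and the `terms_near` helper are invented for illustration; real studies work on far larger corpora with richer statistics:

```python
from collections import Counter

def terms_near(speeches, topic):
    """Count words that appear in sentences mentioning the topic."""
    counts = Counter()
    for speech in speeches:
        for sentence in speech.lower().split("."):
            words = sentence.split()
            if topic in words:
                counts.update(w for w in words if w != topic)
    return counts

# Hypothetical speech fragments, not real data.
party_a = ["Healthcare is a universal right. We must fund healthcare for all."]
party_b = ["Healthcare costs burden the market. Private healthcare drives choice."]

print(terms_near(party_a, "healthcare").most_common(3))
print(terms_near(party_b, "healthcare").most_common(3))
```

Even this crude word count would surface how one side talks about rights and the other about markets when the same topic comes up.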

Sentiment analysis

Sentiment analysis is at the same time the most and the least advanced of the three concepts. While there are several services and frameworks that can be used to do sentiment analysis right now, there are very few practical applications that could make use of it.

The current services rely on machine learning. They have been fed a large amount of text data together with emotion labels. With this information the AI system can identify feelings in text.
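As a toy illustration of that training process, here is a minimal word-counting classifier in Python. The tiny training set and the `classify` helper are invented; real services use far larger labeled corpora and proper statistical models:

```python
from collections import Counter, defaultdict

# Invented training examples: (text, emotion label).
training = [
    ("what a wonderful hopeful day", "joy"),
    ("i love this great news", "joy"),
    ("this is terrible and sad", "sadness"),
    ("awful news i feel sad", "sadness"),
]

# Count how often each word appears under each label.
word_counts = defaultdict(Counter)
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Score each label by how many of its known words appear."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("such great and wonderful news"))  # → joy
```

The principle is the same as in the commercial services, just stripped down to counting: the labels in the training data determine what the system is able to recognize.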

However, there are several reasons why sentiment analysis is far from perfect. Sentiment is hard to identify from written text even for humans. It lacks all the non-verbal parts of communication, like tone of voice and rhythm. These restrictions make it practically impossible to identify, for example, sarcasm or irony.

Another restriction on sentiment analysis comes from context. AI systems don’t have it. If the training data is too general, some topics might be tied to certain feelings regardless of what has been said about them.

For example, “war” is a difficult word. Usually discussions about war include sadness or frustration. If we analyse a text that is very specific about war and peace, there might be a lot of hope and happiness in the sentences even if they contain the word “war”. AI would probably get confused and find sadness and frustration just because it sees the word “war”.

An important finding is that sentiment analysis tools should be trained on context-specific data. If we want to analyse text about peace, war and security, our training data should also talk about peace, war and security.

So far we have talked about technical restrictions. But when feelings are concerned, there’s always the human factor as well. People don’t like having their feelings labeled. Think about a machine that says “I noticed that you are getting a bit angry”. That would probably make you even angrier.

When Honkela talks about feeling analysis, he talks about learning: teaching us about our own feelings and reactions as well as how our actions affect others. And that’s where sentiment analysis for peace tech is at its best. As a tool for reflection.

Reflection is never easy for anyone. You have to question your own decisions and feelings. And there a machine just might be the right tool. A machine is a neutral partner to talk to, so you might be more receptive to the ideas it gives you. And for that, the current services just might be enough.

Million people meeting

A million people meeting is simple enough as an idea. Our political system is based on a chosen few who make the decisions for the rest of us. If people could discuss topics simultaneously in smaller groups, we could collect more of people’s collective knowledge to benefit everyone.

The idea is that a machine would mine information from millions of simultaneous discussions. The system would look for common patterns in the opinions and facts and try to summarize the collective knowledge. The machine could also feed the trends found back into the discussion.

How feasible is this idea? Gathering a million people to talk about one topic is hard enough, not to mention getting them to do it on one platform that could reach all the discussions.

Sounds hard, but this is actually happening every day. On Twitter and Facebook, trending topics definitely have millions of simultaneous discussions ongoing. So, it seems that the data is already there.

Next we need something called data mining. It combines statistics and machine learning to find patterns in data.

One example of data mining is finding the most commonly used words and how they connect to each other. You might remember tensors from meaning negotiation. That’s what we are looking at here as well.
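A minimal sketch of this kind of mining, with invented discussion snippets; real data mining would normalize the words and weigh the co-occurrence counts statistically:

```python
from collections import Counter
from itertools import combinations

# Invented discussion snippets standing in for millions of posts.
posts = [
    "peace talks need local voices",
    "local voices shape peace",
    "talks about peace continue",
]

# Count how often each pair of words appears in the same post.
pair_counts = Counter()
for post in posts:
    words = sorted(set(post.split()))
    pair_counts.update(combinations(words, 2))

print(pair_counts.most_common(2))
```

The resulting pair counts are exactly the kind of concept-connection data a tensor would hold, just in the simplest possible two-dimensional form.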

Based on those connections, through some kind of visualisation, we could show what has been discussed. From this point on we need a human to figure out how the terms and relations connect to the topic discussed. To put it another way: machines could visualise the discussion topics and structures, but they couldn’t form opinions or suggestions.

Another approach would be to automatically summarize the discussions. The problem here is that a million people produce a lot of discussion, and it would be hard for a machine to make a compact and meaningful summary of it. An expert approach would be more likely to find interesting points of view in the data.


Timo Honkela describes a utopia. The Peace Machine just might be part of the future, but bringing machines into human communication has significant risks as well.

Today’s AI technology performs best on specific problems. If we are able to narrow down the questions we want to answer, we can utilize AI efficiently. But in generic cases humans still outperform machines.

Practical Peace Machine, Part 1: A short introduction to artificial intelligence

Futurice has been investigating artificial intelligence and peace tech in a project called Peace Machine. Timo Honkela, the father of the Peace Machine concept, says that his ideas are for the future, something that cannot be done today. However, we wanted to understand how far we can get with today’s technologies.

This blog post is divided into two parts. In this first part we’ll discuss what AI is and what it is good for. In part two we’ll take a deep dive into the concepts of the Peace Machine to understand how far we could implement them.

So, what is artificial intelligence?

Artificial intelligence (AI) is about machines making autonomous decisions based on the data they perceive from their surroundings. The data could come from another computer system, be added by a human, or be measured with sensors.

Quite often you hear people talking about algorithms and the word has become almost a synonym for AI. But not all algorithms are intelligent. An algorithm is generally just a set of rules or instructions.

Try it out: Waving as an algorithm
1. Raise your arm
2. Move your arm left
3. Move your arm right
4. Repeat from step 2

For an intelligent algorithm you need more than just instructions. The algorithm should interpret its surroundings, the data, and be able to choose its actions based on it.

You can approach AI in two ways. Either you make a set of precise rules on how the system should work, or you allow the system to learn from previous data. The latter approach is also known as machine learning (ML).

For example, if we wanted to create a system that predicts the eye color of children based on their parents’ eyes, we could define: if both parents have blue eyes, the child will likely have blue eyes as well. If either parent has brown eyes, the child will likely have brown eyes. (The truth is not as straightforward, but it will do for our example.)

The ML approach would be to gather a data set of parents, children and everyone’s eye colors. The algorithm would then calculate a set of probabilities for the child’s eye color based on the parents’ eye colors.
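Both approaches can be sketched in a few lines of Python. The rules, records and helper names below are invented for illustration; real genetics is more complicated, as noted above:

```python
from collections import Counter, defaultdict

def predict_rule(p1, p2):
    """Rule-based approach: the hand-written rules described above."""
    return "brown" if "brown" in (p1, p2) else "blue"

# ML-style approach: invented records of (parent1, parent2, child) eye colors.
records = [
    ("blue", "blue", "blue"),
    ("blue", "blue", "blue"),
    ("blue", "brown", "brown"),
    ("brown", "brown", "brown"),
    ("blue", "brown", "blue"),
]

# Count child outcomes for each (order-independent) pair of parent colors.
outcomes = defaultdict(Counter)
for p1, p2, child in records:
    outcomes[tuple(sorted((p1, p2)))][child] += 1

def predict_ml(p1, p2):
    """Probabilities of the child's eye color, learned by counting."""
    counts = outcomes[tuple(sorted((p1, p2)))]
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}

print(predict_rule("blue", "brown"))  # → brown
print(predict_ml("blue", "brown"))    # → {'brown': 0.5, 'blue': 0.5}
```

Note how the rule gives one confident answer while the learned model gives probabilities that depend entirely on the records it was given.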

Choosing the approach

Both approaches have good and bad sides. Some phenomena are so complicated that describing them with simple rules would be either extremely difficult or simply impossible. Spoken language is a good example of such a phenomenon. That’s why NLP (natural language processing) tools often use ML.

On the other hand, ML is far from perfect. The results are at best as good as the training data. Is the amount of data sufficient? Do we have some systematic error in the training data? The more sensitive the data the system is using, the more important these questions become.

Think about a banking system that decides whether a person is given a loan or not. The system is trained on historical data of loans given and declined. If the bank had historically declined more applications from a certain postal code area than from others, the algorithm couldn’t reason about why the people in that specific area were declined. It would just assume that postal code is an important factor in loan decisions, and decline any future applications from that area.
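A toy sketch of how such a bias arises, with invented loan history and postal codes; a real credit model would be far more complex, but the mechanism is the same:

```python
from collections import Counter, defaultdict

# Invented loan history: (postal_code, decision).
history = [
    ("00100", "approve"), ("00100", "approve"), ("00100", "decline"),
    ("00500", "decline"), ("00500", "decline"), ("00500", "approve"),
]

# Tally past decisions per postal code area.
by_area = defaultdict(Counter)
for area, decision in history:
    by_area[area][decision] += 1

def naive_decision(area):
    """Repeats the majority decision for the area -- and its bias."""
    return by_area[area].most_common(1)[0][0]

print(naive_decision("00500"))  # → decline
```

The model never asks why area 00500 was declined more often; it simply learns to keep declining it, which is exactly the feedback loop the text warns about.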

These kinds of biases in machine learning systems are not at all rare.

It is up to the creators of the services to make sure that the systems avoid biases. When we talk about peace tech, it is hugely important not to have a system that divides people in an unjust manner. Bad training data might actually cause the system to worsen the situation rather than improve it.

You can also combine the two approaches. For sensitive data, if it’s needed in the first place, it’s better to choose the rule-based approach. For non-sensitive and complex phenomena we can use ML systems. Together the algorithms can form better systems than strictly keeping to one or the other.

Be specific to get results

A lot of AI utopias might give people the idea that machines are very close to human intelligence. This is very far from the truth. In fact, AI performs best (if not only) in very restricted tasks.

Computers don’t work like humans. Human brains are specialized in handling concepts that can have complex connections to one another. Computers don’t have concepts. Computers look at data points like numbers, letters and words. They can see connections between data points but don’t understand why a connection is there.

With enough data, an AI system could answer questions like: “Girl is to queen as what is to king?” It should find the connections and answer “Boy is to king as girl is to queen”. But unlike a human, who could see the connections of gender and status, the computer only sees that these words are used similarly.
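This word-analogy trick can be illustrated with toy word vectors. The two-dimensional coordinates below are invented; real systems learn vectors with hundreds of dimensions from large text corpora:

```python
# Toy 2-dimensional "embeddings": (gender, status). Invented numbers.
vectors = {
    "girl":  (0.0, 0.0),
    "boy":   (1.0, 0.0),
    "queen": (0.0, 1.0),
    "king":  (1.0, 1.0),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? by vector arithmetic (b - a) + c."""
    ax, ay = vectors[a]
    bx, by = vectors[b]
    cx, cy = vectors[c]
    target = (bx - ax + cx, by - ay + cy)
    # Return the word whose vector is closest to the target point.
    return min(vectors, key=lambda w: (vectors[w][0] - target[0]) ** 2
                                      + (vectors[w][1] - target[1]) ** 2)

print(analogy("girl", "queen", "boy"))  # → king
```

The arithmetic works, but nothing in it knows what gender or royalty means; the axes only have meaning to us.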


For the Peace Machine it’s important to understand that AI systems have different approaches, that they work best in restricted environments, and that they are better at seeing details than describing entities in data. In the second part we’ll dig deeper into the Peace Machine concept and discuss how these theories would look in practice.

How to get trustworthy data?

On Thursday the 17th of May our project took a deep dive into the world of artificial intelligence. Antti Rauhala talked about what AI is and how it can be used. After that the groups continued from our previous workshop’s case example of South Sudan.

To summarize Rauhala’s speech, one could say that there is always a decision related to any AI system. You can find patterns, make predictions or mimic a human expert, but in the end there is a decision to make. That decision should probably be made by a human aided by the machine.

Our group work included two example ideas that might be used in a crisis area. One was an early warning system and the other a system for collecting and showing up-to-date information about the crisis.

In both cases there were three concerns for the data:

1) Data quality is something that can be hard to maintain even in perfect conditions. Inconsistent format or missing pieces of information cause defects that are hard to fix.

2) Data trustworthiness should be questioned in conflict areas. There are few (if any) trusted sources, and extra care needs to be taken to avoid deliberately altered data.

3) Data ethics is a trending term and a very important topic. Which data is used? How is privacy protected? These are questions that don’t yet have industry-wide answers.

There was plenty of discussion on the topics. In conflict areas nothing can be taken for granted. For example, taking notes is not as straightforward as you might think. In some cases not even pen and paper are allowed in discussions.

Another problem is the fragility of infrastructure. Assuming, for example, that electricity is available for your solution might render your piece of peace tech useless very fast.

Two key findings were made from the discussion.

Gathering a live situation picture brought up the idea of a phone line where reports could be called in. While peace builders have discussions with locals, the notes often take time to gather and send to all the parties. If the notes were gathered over the phone and transcribed by machines, they would be available immediately.

From technology’s point of view there are two interesting upsides. Phones are very reliable technology. Sure, there are situations when even they are down, but it’s still much more likely to get a phone call through than to get a good internet connection working. And for data security, access to the system can be restricted more easily when the data input is a phone call rather than an online service.

Another key finding of the workshop was that we can follow not only the data we are able to collect, but also the data that is missing. For example, we can identify active members of the community and follow their actions on social media. If a key actor in the area suddenly stops posting, or there is a clear change in their style of communication, we can raise the question of whether something is happening.
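A minimal sketch of such a missing-data signal, with invented timestamps and a hypothetical `silent_too_long` helper; a real system would account for each person’s normal posting rhythm rather than a fixed threshold:

```python
from datetime import datetime, timedelta

# Invented posting timestamps for one key community member.
posts = [
    datetime(2018, 5, 1), datetime(2018, 5, 3), datetime(2018, 5, 5),
]

def silent_too_long(post_times, now, max_gap_days=7):
    """Raise a flag when an otherwise active poster has gone quiet."""
    last = max(post_times)
    return (now - last) > timedelta(days=max_gap_days)

print(silent_too_long(posts, datetime(2018, 5, 20)))  # → True
```

The flag doesn’t say what happened, only that the silence itself is data worth a human’s attention.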

As always, both of these ideas should be customized and fitted to local needs.


Our project’s last event will be held on the 31st of May. There we will test out some peace tech design tools that have been developed during our project. The results, as always, can be found in this blog.

Humans and machines. Together.

When a doctor makes a diagnosis, she is pretty accurate. When a machine makes a diagnosis, it is not quite as accurate. But when a machine and a human make a diagnosis together, they are more accurate than the doctor alone.

This is the example Minna Mustakallio from Futurice used in her opening keynote at our Co-operating with the Machines workshop. The takeaway was that we, humans and machines, have very different strengths and weaknesses. That means that instead of fearing machines taking our jobs or our planet, we should consider how we could work together.


How could machines and humans co-operate?

It seems to be a never ending discussion. Are the machines going to enslave us all?

What if, instead of talking about who controls whom, we tried to find ways for humans and machines to complement each other? Humans are better at asking questions and machines are better at answering them. So, when you think about it, the obvious way to go is co-operation.

Human and machine co-operation workshop from Futurice on Vimeo.


What is peace tech?

Peace tech is an umbrella term for all technology used to build peace. That’s pretty clear from the term itself. But what can actually be called peace tech?

There are many opinions and definitions on the term. Some believe that all technology that increases happiness is peace tech. Others would rather have a more restricting definition.

Maria Mekri from the independent peace and security think tank SaferGlobe believes that only technology that reduces or prevents violence can be called peace tech.

On the 22nd of March Futurice will host a cross-competence workshop on the topic. We’ll take a deep dive into the very definition of peace tech and ask our participants: “If peace tech had a Wikipedia page, what would it say?”

We’ll keep you up to date on the results of the workshop!

Peace, machine and the Peace Machine

World peace

Most of us who were born in the ’80s and grew up in the post-Cold War world consider it a myth. Something that Miss Universe contestants talk about. Or a left-wing dream of a political minority. Definitely not something that could be achieved.

On the other hand, the same generation has been struck by the fact that world peace does not increase by default. History has changed direction drastically in the past ten years, and world peace is becoming a topic that raises interest again.
