
FuturICT and Social Sciences: Big Data, Big Thinking

Rosaria Conte, Nigel Gilbert, Giulia Bonelli, and Dirk Helbing

posted on 19 July 2011

 

Imagine storing all the digital information produced in the world in just one year: you would have a pile of DVDs tall enough to reach the Moon and back. How about all the data collected since the beginning of the computer era? The quantity is so huge that traditional units of measurement cannot cope. For this reason, a few years ago computer scientists started talking about “Big Data”, referring to the gathering, formatting, analysis and manipulation of massive amounts of digital information.

 

For social scientists, the challenge is now to make sense of all these data in a way that illuminates our social world.  This will involve a collaboration between social scientists, who have the concepts, theories and analytical expertise that are needed, and scientists and engineers, especially computer engineers, physicists, and complexity scientists, who are used to handling vast amounts of data.

 

The European Commission, as part of its Seventh Framework Programme, has launched a competition for proposals for “Flagship” research initiatives that will make major advances.  Six themes have been accepted for further development; of these, only two will be selected and then funded with €10 billion over ten years.  Among the themes being considered are research programmes in nanotechnology, robotics, personalised medicine, and one on understanding complex social systems, called FuturICT.

 

Unleashing the power of information for a sustainable future


FuturICT aims at understanding and managing complex social systems, with a focus on sustainability and resilience. Its starting point is Big Data: models of techno-socio-economic systems will be developed, grounded in data from existing and new information technology systems. Computational Social Science will play a crucial role: revealing the processes underlying the emergence and maintenance of societies constitutes a major challenge of the project. FuturICT will help the social sciences take advantage of the computational instruments and data-driven evidence required for building and testing social science theories.

 

But is Big Data enough? Social scientists working within FuturICT believe it is not. Just because we have billions of bits of information does not mean we know the intentions behind them or their consequences. For this reason, the social sciences have a vital role in the project, ensuring that it has not only good data but also good theories.

 

The greatest difficulty for a project like FuturICT lies in asking good questions.  For example, why do financial markets crash again and again? How can we construct resilient institutions? What determines human happiness and well-being, and how are they influenced by personal wealth? How can society change behaviours that destroy our environment and other important public goods? These are just some of the crucial social problems FuturICT will work on. After an analysis of these issues and their consequences, data mining and large-scale computer simulations will step in to provide empirical tests.  However, “why” questions cannot be answered through data analysis alone. Therefore, fundamental questions will be discussed during so-called “Hilbert Workshops”—think tanks dedicated to triggering new approaches and ideas. These will naturally lead to the construction of novel theories, developed with the help of an innovative ICT, both responsive and responsible.
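To give a flavour of what “large-scale computer simulations” of social processes can look like, here is a deliberately minimal sketch of an agent-based opinion-dynamics model (a simple voter-style model, not FuturICT’s actual methodology; all names and parameters are illustrative). At each step one randomly chosen agent imitates the opinion of another, and the population drifts toward consensus:

```python
import random

def simulate_opinions(n_agents=100, n_steps=2000, seed=42):
    """Toy voter-model sketch: each step, one random agent
    adopts the opinion of another randomly chosen agent."""
    rng = random.Random(seed)
    # Each agent starts with a random binary opinion (0 or 1).
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(n_steps):
        i = rng.randrange(n_agents)   # agent who updates
        j = rng.randrange(n_agents)   # agent being imitated
        opinions[i] = opinions[j]     # social imitation step
    return opinions

final = simulate_opinions()
share = sum(final) / len(final)
print(f"Share holding opinion 1 after simulation: {share:.2f}")
```

Real models of the kind the project envisages would of course be calibrated against empirical data and run at vastly larger scales, but even this sketch shows why theory matters: the imitation rule embodies a behavioural hypothesis that the data alone cannot supply.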

 

So, big questions, big data and big theories: FuturICT goes beyond pure information, and takes the path of big thinking.

 

To find out more about FuturICT, see http://www.futurICT.eu

Discussion

That sounds interesting.

Your comment is very good!

“FuturICT’s starting point is Big Data: models of techno-socio-economic systems will be developed, grounded in data from existing and new information technology systems.”

 

That was not Isaac Newton’s starting point.  Mario Livio (Is God a Mathematician? p. 112) writes:

 

“Newton took observations and experiments that were accurate to only about 4 percent and established from those a mathematical law of gravity that turned out to be accurate to better than one part in a million.”

 

I think you guys’ fascination with large data sets is misplaced.  If you are looking for another Newton, you are looking in the wrong place.

 

http://www3.unifr.ch/econophysics/?q=content/simplified-exposition-axiomatic-economics

 

Just as Newton did, I start by clearly stating some reasonable axioms and then proving theorems based on those axioms and on nothing else.  Have you considered this approach?