The President of the French Republic presented his vision and strategy to make France a leader in artificial intelligence (AI) at the Collège de France on 29 March 2018.
Artificial intelligence often sounds like a promise for the future, but we must not fool ourselves: this revolution is happening here and now.
This radical transformation is both an unprecedented opportunity and an immense responsibility.
We have to fully seize the opportunities offered by artificial intelligence now, while designing the framework to regulate it.
That is the President's ambition, and he is committed to the following:
France is already home to a wealth of talent, and these experts are what can make France a leader in artificial intelligence. It is up to us to support this promising ecosystem.
Tech giants including Facebook, Google, Samsung, DeepMind, Fujitsu and IBM have understood the potential of French talent.
They have chosen to establish their new artificial intelligence research and innovation centres in France.
To build on these successes, support our ecosystem and ensure it succeeds in global competition, here are three tangible measures:
Speeding up the emergence of artificial intelligence also requires us to provide resources equal to our ambition: the government will dedicate €1.5 billion to the development of artificial intelligence by the end of the current five-year term, including €700 million for research.
France has a key asset: massive centralized databases.
The problem is that they are under-exploited.
The President is committed to opening them up so that players in each sector (health care, transport, agriculture, etc.) can make the best use of them and innovate together.
But we must be clear: using aggregated data does not mean impinging on the privacy of users. This open data policy will be accompanied by a European framework for the protection of personal data.
Artificial intelligence can be frightening.
Enabling the development of AI also means addressing the issues it raises.
The President is committed to ensuring that transparency and fair use are central to algorithms.
An international group of experts in artificial intelligence will be created, based on the IPCC (Intergovernmental Panel on Climate Change) model.
It will aim to organize independent global expertise.
These two priorities, transparency and fair use, will be the subject of education programmes so that our future citizens are prepared for these transformations.
Defining artificial intelligence (AI) is not easy. The field is so vast that it cannot be restricted to a specific area of research: it is more like a multidisciplinary programme. Originally, it sought to imitate the cognitive processes of human beings. Its current objectives are to develop automatons that solve some problems better than humans, by all means available.
AI is at the crossroads of several disciplines: computer science, mathematics (logic, optimization, analysis, probabilities, linear algebra), and cognitive science, not to mention the specialized knowledge of the fields to which we want to apply it. The algorithms that underpin it are based on equally varied approaches: semantic analysis, symbolic representation, statistical and exploratory learning, neural networks, and so on. The recent boom in AI is due to significant advances in machine learning. Learning techniques are revolutionary compared to AI's historical approaches: instead of the machine being programmed with the rules that govern a task (often much more complex than one might think), it now discovers them itself.
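The shift described above, from hand-coding the rules that govern a task to letting the machine discover them from examples, can be illustrated with a toy example (a minimal sketch for illustration only, not drawn from the report): a one-dimensional "decision stump" that learns a separating threshold purely from labelled data.

```python
# Illustrative sketch: instead of a programmer writing the rule
# "predict 1 when x >= 6", the machine discovers the threshold itself
# by searching for the value that best separates the labelled examples.

def learn_threshold(points, labels):
    """Return the threshold t minimising errors for the learned rule
    'predict 1 if x >= t, else 0'."""
    best_t, best_errors = None, len(points) + 1
    for t in points:  # try each observed value as a candidate threshold
        errors = sum(
            (1 if x >= t else 0) != y
            for x, y in zip(points, labels)
        )
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Hand-labelled examples: values below 5 belong to class 0, others to class 1.
xs = [1, 2, 3, 6, 7, 8]
ys = [0, 0, 0, 1, 1, 1]
print(learn_threshold(xs, ys))  # → 6: the rule 'x >= 6' separates the data
```

Real machine-learning systems apply the same principle at vastly larger scale: the "rules" are millions of parameters adjusted to fit the training data rather than written by hand.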
AI is also developing quickly due to the international “dataization” of all sectors (i.e. big data) and the exponential increase in computing power and data storage capacities. Applications are multiplying and directly affecting our daily lives: image recognition, self-driving cars, disease detection, and content recommendation are some of the many possibilities being explored. The universal nature of AI and its many variations herald a new revolution, with its share of pitfalls and opportunities.
Many artificial intelligence (AI) strategies start with the collection of large bodies of data.
Data is a key competitive advantage in the global AI race. Digital giants in China, Russia and the United States, which have built up their positions by focusing on data collection and use, have a considerable head start. This asymmetry is clearly visible: for instance, in France, large American platforms capture approximately 80% of visits to the 25 most popular sites every month.
A data policy that takes AI requirements into account is therefore essential if France and the European Union wish to attain the goals of sovereignty and strategic autonomy. Although these goals are ambitious, they are necessary steps in the creation of a French and European AI industry.
The government must encourage the creation of data commons and support an alternative data production and governance model based on reciprocity, cooperation and sharing. The goal is to boost data sharing between actors in the same sector.
The government must also encourage data sharing between private actors, and assist businesses in this respect. It must organize for certain data held by private entities to be released on a case-by-case basis, and support data and text mining practices without delay.
Most of the actors heard by the mission were in favour of progressively opening up access to some data sets on a case-by-case and sector-specific basis for public interest reasons. This could be in one of two ways: by making the data accessible only to the government, or by making the data more widely available, for example to other economic actors.
The right to data portability is one of the most important innovations in recent French and European texts. It will give any individual the ability to migrate from one service ecosystem to another without losing their data history.
This right could be extended to all citizen-centred artificial intelligence applications. In this case, it would involve making personal data available to government authorities or researchers. This would be beneficial for three reasons:
Dominant players and emerging countries in the AI field have adopted radically different development models. France and Europe will not be able to claim a place on the global stage if they simply attempt to create a “European Google”.
France must instead draw on its economy’s comparative advantages and areas of excellence, focusing on priority sectors where our industries can play key roles at the global level.
The sectors with sufficient maturity to launch major transformation operations are health, transport, the environment and defence and security.
Efforts must focus on achieving these three goals:
Industrial policy must focus on the main issues and challenges facing our era, including the early detection of pathologies, P4 (predictive, preventive, personalized and participatory) medicine, medical deserts and zero-emission urban mobility. These issues could be identified by sector-specific commissions in charge of publicizing and running activities for their ecosystems.
To support innovation, sector-specific platforms must be created to compile relevant data and organize its capture and collection; to provide access to large-scale computing infrastructures suitable for AI; to facilitate innovation by creating controlled environments for experiments; and to enable the development, testing and deployment of operational and commercial products.
The AI innovation process must be streamlined by creating testing areas (sandboxes) with three characteristics:
The goal of these sandboxes will be to facilitate the testing, iterative design and deployment of AI technologies in coordination with future users.
France’s research and higher education institutes in the artificial intelligence (AI) field have always been widely renowned at the international level. French scientific training has a reputation for excellence and helps create a world-class pool of researchers.
Nevertheless, the AI research field has changed considerably in recent years. There is increasing competition from private-sector research institutes, with major AI firms opening fundamental research centres. This has accelerated the “brain drain” of students and experienced researchers.
Another difficulty facing French research is its weak performance in terms of the transfer and use of this knowledge by industry, in both startups and large groups.
To better connect geographical regions and AI research areas, the mission has developed three key proposals:
Create interdisciplinary AI institutes (3IA) in selected public higher education and research establishments. These institutes must be spread throughout France, each covering a specific application area or field of research.
Allocate appropriate resources to research, including a supercomputer designed especially for AI applications in partnership with manufacturers. In addition, researchers must be given facilitated access to a European cloud service.
Make careers in public research more attractive by boosting France's appeal to expatriate or foreign talent: increasing the number of master's and doctoral students studying AI, increasing the salaries of researchers and enhancing exchanges between academia and industry.
While it is not known how many jobs will be created or destroyed due to the automation of tasks, it is likely that most occupations and organizations will change.
This problem must be tackled head on by acknowledging that a major shift is taking place, and that production processes will be distributed between humans and machines in the future. France must set aside the necessary resources to plan and prepare for this transition. Priority must be given to developing complementarity between human labour and machine activity.
More than 50% of tasks in 50% of occupations could be automated, according to France’s Employment Orientation Council.
93% of the Mediametrie survey respondents believe that AI technologies will modify the way they work.
New training models must be planned and tested to prepare for these professional transitions. Three main proposals have been put forward:
The creation of a public laboratory on the transformation of work will encourage reflection on the ways in which automation is changing occupations. It will also make it possible to test tools supporting professional transitions, especially for those likely to be most affected by automation.
To improve future working conditions, discussion must focus on developing a "complementarity index" for businesses and on including all aspects of the digital transition in social dialogue. This could result in a legislative project on working conditions in the automated era.
This testing would make it possible to address AI-related changes to value chains. Currently, businesses fund the vocational training of their own employees. However, for their digital transformation, they often call on other actors who capture value and play a key role in automating tasks but do not help fund vocational training for employees. New funding methods must therefore be tested through social dialogue.
Global warming is now a scientific certainty. Taking into account the environmental impacts related to the development of digital practices and services is therefore essential.
Although the growth of artificial intelligence (AI) adds to the negative environmental impact of digital technologies, it could also contribute to environmentally friendly solutions. AI offers many opportunities in the ecological field, including a better understanding of how biological ecosystems evolve, optimized resource management, environmental preservation and improved protection for biodiversity.
An ambitious AI policy must do more than just optimize resource use. It must promote growth that is both resource-frugal and inclusive, contributing to a smart ecological transition.
The government must use AI to support the ecological transition:
As part of this approach, it must help AI become less energy-intensive by supporting the ecological transition of the European cloud industry.
Lastly, ecological transition must go hand in hand with the liberation of “ecological data”. AI can help reduce our energy consumption and restore and protect nature – for instance, by using drones to carry out reforestation, or by mapping living species through image recognition technology.
Artificial intelligence (AI) is already omnipresent. Every day, we unknowingly interact with smart systems that make our lives easier – or, at least, that are supposed to make our lives easier.
However, many questions are being asked today: does AI really seek to improve our well-being? If not, how can we make sure it does?
These questions have led to a wide-ranging discussion on the ethical issues related to the development of AI technologies and, more generally speaking, algorithms.
To ensure that new AI technologies respect our social values and rules, we must take action now by mobilizing scientists, government, industry, entrepreneurs and civil society.
In the long term, artificial intelligence technologies must be explainable if they are to be socially acceptable. For this reason, the government must take several steps:
Develop algorithm transparency and audits
Consider the responsibility of AI actors for the ethical issues at stake:
Create a consultative ethics committee for digital technologies and AI, which would organize public debate in this field. This committee would have a high level of expertise and independence. Indeed, 94% of those interviewed considered that the development of AI in our society should be regularly addressed in public debates.
Guarantee the principle of human responsibility, particularly when AI tools are used in public services. This includes setting boundaries for the use of predictive algorithms in the law enforcement context. It also means extensively discussing any development of lethal autonomous weapons systems (LAWS) at the international level, and creating an observatory for the non-proliferation of these weapons.
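The algorithm audits mentioned above can take many forms. One elementary check (a hypothetical sketch, not a procedure prescribed by the report) is measuring whether an automated decision system produces markedly different approval rates for two groups, a common first screen for disparate impact:

```python
# Minimal sketch of one kind of algorithmic audit (illustrative only):
# comparing a decision system's positive-decision rate across two groups.
# All names and data here are hypothetical.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in approval rates between group A and group B.
    A value near 0 suggests parity; a large gap does not prove unfairness
    by itself, but flags the system for closer human review."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

# Hypothetical audit data: 1 = approved, 0 = denied.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% approved
print(demographic_parity_gap(group_a, group_b))  # → 0.375
```

A real audit would go much further (statistical significance, legitimate explanatory factors, access to the model itself), but even this simple metric shows that transparency requires access to outcome data broken down by group.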
In a world where technologies are becoming key to our future, artificial intelligence (AI) must not become yet another tool for exclusion.
Women account for only 33% of people in the digital sector. Minorities are also underrepresented.
Given the fast-changing nature of AI technologies and practices, our society has a collective duty to be aware of and discuss the issues this raises. This is especially relevant for fragile populations and groups already excluded from the digital sector, for whom AI represents an even greater danger.
AI could lead to a better, fairer and more efficient society, or it could lead to wealth being concentrated in the hands of a very small group of digital elites. Therefore, in the AI field, inclusive policies must seek to attain two goals: ensure that the development of these technologies does not increase social and economic inequalities, and use AI to reduce these inequalities.
This recommendation was supported by more than 85% of those interviewed. To attain this goal, an incentive policy could be implemented. This initiative must be accompanied by a policy to train and raise awareness of diversity issues among educators in the AI industry.
To address the growing inaccessibility of public services and rollback of rights caused by dematerialization, administrative procedures must be modified and mediation skills enhanced.
The government could launch an automated system managing administrative procedures to help individuals better understand administrative rules and how they apply to their personal situations. At the same time, new mediation tools must be implemented to provide support to those who need it.
The government must support social innovation programmes based on AI (dependency, health, social action and solidarity) to ensure that technological advances also benefit those working in the social action field.
On 8 September 2017, French Prime Minister Édouard Philippe tasked Cédric Villani, mathematician and Member of Parliament for Essonne, with a mission on artificial intelligence (AI). The goal was to lay the foundations of an ambitious French strategy in the AI field.
Composition of the mission:
Cédric Villani is a French mathematician and a former student of the École normale supérieure. He holds a doctorate in mathematics, and won the Fields Medal in 2010 and the Doob Prize in 2014. He is now a professor at the University of Lyon, and directed the Institut Henri Poincaré in Paris from 2009 to 2017. He has held visiting positions at several foreign universities. He is a member of the National Assembly for the fifth constituency of Essonne and vice-president of the OPECST (Parliamentary Office for the Evaluation of Scientific and Technological Choices). He is a member of the Academy of Sciences and has published several books, including Birth of a Theorem, which has been translated into 12 languages.
Marc Schoenauer has been Principal Senior Researcher at INRIA since 2001. He graduated from the École normale supérieure. For 20 years, he was a full-time researcher with CNRS (the French National Research Centre), working at CMAP (the Applied Mathematics Laboratory) at École Polytechnique. He then joined INRIA and, in September 2003, founded the TAO team (Thème Apprentissage et Optimisation, i.e. the Machine Learning and Optimization theme) at INRIA Saclay together with Michèle Sebag. He has co-authored more than a hundred articles and supervised 35 doctoral dissertations. He was president of AFIA (the French Association for Artificial Intelligence) from 2002 to 2004.
An engineer by training, Yann Bonnet began his career as a consultant. He joined the French Digital Council in 2013 as General Rapporteur, before becoming Secretary General in 2015. He was in charge of steering the national consultation on digital transformations, launched by the Prime Minister in 2014, an initiative that eventually led to the Law for a Digital Republic. Yann Bonnet was also in charge of multiple reports, including on taxation in the digital age, the digital dimension of the TTIP negotiations and the fairness of online platforms.
Charly Berthet is a French lawyer and head of legal and institutional affairs at the French Digital Council. He has worked in particular on regulation, data protection and civil liberties. He has been a consultant for the Ministry of Foreign Affairs, where he helped draft its international digital strategy. He graduated from University Paris II and University Paris Dauphine.
A graduate of Sciences Po and HEC, Anne-Charlotte Cornut has been a rapporteur at the French Digital Council since April 2016. She has worked on the digital transformation of SMEs and of higher education and research. She was previously adviser to the CEO of 1000mercis/numberly, a data marketing company.
François Levin graduated in philosophy from the École normale supérieure de Lyon and in public administration from University Paris I. He joined the French Digital Council in 2015 and is now head of economic and social affairs. He has worked in particular on the digital transformation of work and training, as well as on culture and copyright law.
Bertrand Rondepierre graduated from École Polytechnique, holds an engineering degree from Télécom ParisTech and is an alumnus of the Mathematics, Vision, Learning master's programme at ENS Paris-Saclay. He works as a system architect for the DGA, where he runs projects in the digital and artificial intelligence fields.
Stella Biabiany-Rosier spent her career as an assistant manager in consulting and law firms, then in ministerial offices. Since July 2017, she has been assisting the Secretary General of the French Digital Council.
Assisted by Anne-Lise Meurier, Zineb Ghafoor, Candice Foehrenbach, Camille Hartmann, Judith Herzog, Marylou Le Roy, Jan Krewer, Lofred Madzou and Ruben Narzul.
The mission’s work was carried out between 8 September 2017 and 8 March 2018.
Its tasks included: