
WHY DO WE NEED AN ITALIAN INSTITUTE FOR ARTIFICIAL INTELLIGENCE?

Germany is the only European country that, with the Fraunhofer Institute, has a single centralised institute for technology

Although the news that Minister Colao has announced that the Institute for Artificial Intelligence, as a unitary body with general competences, will not go ahead does not come entirely unexpected, it is nonetheless yet another blow for Turin, and for the credibility of a political class that has fed its citizens unfounded expectations.

Not surprisingly, a Turin “lobby” is rightly (if somewhat belatedly) forming to persuade the Government to reconsider its position.

1. The “piecemeal” approach cannot work

As we tried to explain in our White Paper, distributing Italy’s AI activities across various hubs around the country is not the same thing as an Italian Institute for Artificial Intelligence as proposed in the Italian Strategy for Artificial Intelligence presented last year by the committee of experts. Indeed, the various “missions”, and in particular those proposed for Turin, would lack precisely the most important element, the humanistic and political reflection that should have formed the basis for the further development of the Italian strategy, in keeping with Olivetti’s formula of joining culture, politics and technology.

The considerations set out in our White Paper therefore remain perfectly valid. It is not a matter of continuing a “transfer” of technologies created elsewhere, but of reflecting objectively on the role of artificial intelligence in the future of Europe, as would be necessary if the ongoing Conference (of which nothing at all is being heard) is to be taken at all seriously.

As Don Luca Peyron has rightly said, what is at stake with the Institute is first and foremost our role in Europe. On the one hand, the whole world is realising that it needs a “thinking head” on artificial intelligence, yet no country manages to give itself one, owing to the same political and cultural difficulties we encounter in Italy. As testimony to this international debate, we attach the executive summary of the final report of the German parliamentary commission.

2. Transform or perish

If we do not talk about Europe’s role in the artificial intelligence society that is beginning right now, with Made in China 2025 and with the US Congress’s “NSCAI Commission”, what do we want to talk about at the Conference on the Future of Europe? Within a decade, societies all over the world will be unrecognisable.

The Internet and artificial intelligence will strip finance and culture, politics and the armed forces, businesses and private citizens of every form of autonomy. Anyone outside the pyramid of power of the IT gurus, the intelligence services and the governments of the Great Powers will count for nothing. Forget the brain drain! We will see an outright enslavement of the most capable minds, forced to work only for the powers that be, as happened to von Braun and Antonov after the Second World War.

As far as individual workers are concerned, anyone who has not turned into a small IT entrepreneur, automatically carrying out the work previously done by hundreds of employees, will no longer be able to work at all. No one will manage to do any job alone, but only by subjecting themselves to large chains (such as Amazon or Uber today), or by obtaining solid support from their own State, which could, and would, sustain this small-scale entrepreneurship just as agriculture is supported today through farm loans that allow ordinary farmers to become owners of colossal automated agricultural machinery.

Indeed, even America is encountering considerable difficulty in giving itself such a “head”, owing to the power struggle between Google and the Pentagon, and Europe, for its part, is doing nothing at all, because it is still dominated by the idea of decentralised research, which is entirely useless for managing a transformation as tumultuous as the one under way.

If Italy gave itself a centralised Institute of its own, it would buck the trend, and could even influence the rest of Europe.

Societies that do not organise themselves along these lines will perish. There is no excuse for not doing these things in Europe (as when Valletta said that Italy could not afford them), because a united Europe has all the organisational and financial capacity to carry out even the most ambitious programmes.

And it must carry them out before America and China do, because otherwise its claim to constitute a systemic model (the “trendsetter of the global debate” the Commission talks about so much) will melt away like snow in the sun, in the face of its inability to assert itself against the multinationals that dominate the European market.

The politicians so agitated about the so-called “Gigafactory” (which they would like to contest with Pomigliano, in a pitiful war among the poor) fail to consider that, in that case, we would be talking about 500 executive jobs directed from outside, which at best could be created in 2030 (when, if we do not change our policies, Stellantis will no longer exist, and perhaps neither will Italy or Europe), whereas the 600 positions at the Institute would be for researchers, thinkers and technologists, operational from day one to reorient the Italian and European landscape of research, technology and society before our decline (inevitable without the digital revolution) becomes irreversible.


ANNEX

Deutscher Bundestag

SCHLUSSBERICHT DER ENQUETE-KOMMISSION KÜNSTLICHE INTELLIGENZ

EXECUTIVE SUMMARY OF THE FINAL REPORT OF THE STUDY COMMISSION ON ARTIFICIAL INTELLIGENCE

The text below summarises the main findings of the German Bundestag’s Study Commission on Artificial Intelligence: Social Responsibility and Economic, Social and Ecological Potential, established in 2018.

Introduction

Artificial intelligence (AI) is set to play a relevant role in ever more areas of our lives in the future. AI systems recognise voice commands, filter out spam, recognise images, sort search results, correct typos and suggest products. They translate texts and play Go or chess, the latter long since better than a human. These systems control robot vacuum cleaners, driver assistance systems and entire production plants. AI systems are increasingly helping doctors make diagnoses and select the best therapy for the individual patient. This entails various advantages such as convenience and efficiency, but it is also a matter of safety and health. Furthermore, AI and intelligent systems harbour great potential for solving current societal challenges, such as an ageing society or climate change.

What definition of AI has the Study Commission agreed on? To lay the foundation for discussion, the Study Commission agreed on a description of AI. During the Commission’s work, there was recurring criticism of the awkward and emotionally charged term “AI”, which can trigger exaggerated expectations and fears alike. The Study Commission deliberately refrained from defining AI itself and instead sought to clarify the term (see relevant chapter). In its work, it primarily addressed the aspect of learning systems.

Why should policymakers and society actively address this issue? The use of AI in ever more areas will continuously change our working and private lives far more drastically going forward. It is neither possible nor would it make sense to halt this change. The challenge and aspiration is to shape this change and ensure that it is guided by values for the good of humans and the environment. To manage this feat, Germany and Europe must assume a leading role in the development and use of this key technology. The benefits and opportunities arising from the new technological possibilities should be fostered and harnessed, at the same time weighing up the risks and, if need be, limiting them.

What was the Study Commission’s brief? For this reason, on 26 June 2018 the German Bundestag established a Study Commission with the brief of closely examining AI and its societal, economic and ecological impacts. Based on a common understanding of the technologies, existing and future impacts on different areas of society were to be investigated and recommendations for action for lawmakers were to be developed jointly.

27 October 2020

Who was involved in the Study Commission? The Study Commission comprised members of the Bundestag and experts in equal numbers. In addition, numerous further experts were invited to both the meetings of the project groups and the meetings of the Study Commission, enriching the discussions with ideas and in-depth knowledge.

How did the Study Commission involve the public? Even though a Study Commission is established first and foremost to make recommendations to the German Bundestag, there was a cross-party consensus that the public should be involved. This is why all the presentations given by experts in the Study Commission’s meetings are available to the general public.[1] The Study Commission published the summaries of the individual reports at the end of each project group phase and in spring 2020 set up a digital platform enabling interested citizens to enter into dialogue with each other and with the members of the Study Commission. The presentation of the findings on 28 September 2020 was also broadcast as a livestream where it was possible to put questions to the members of the Study Commission. The publication of this final report will potentially contribute to a broad debate on AI. The Study Commission would like to take this opportunity to thank all citizens and experts for their valuable contributions once again.

What was the setting for the Study Commission’s work? The work of this Study Commission is embedded in a variety of policy initiatives addressing the implications of an increasingly widespread use of AI in all areas of society. These include, for instance, updating the Federal Government’s AI strategy, the work by the Data Ethics Commission, the European Commission’s White Paper on AI and the numerous AI initiatives by European partners. It is of course important to continue this dialogue at all political levels going forward, too.

How did the coronavirus pandemic impact the Study Commission’s work? The Covid-19 pandemic was a watershed for the Study Commission and its work, too. Instead of meeting in person, the individual groups began working first and foremost in video conferences and using digital platforms. Meetings of the entire commission took place online or in hybrid form. The experiences with the pandemic also gave the Study Commission new food for thought in terms of content, which has been included in the final report.[2] In addition, it was no longer possible to hold focus groups, and a planned delegation trip to Russia and Finland had to be cancelled.

Overview of the project groups and the general report

The report in its entirety is the product of in-depth study of the technology, its requirements and areas of use, as well as the opportunities and risks it gives rise to. The Study Commission decided to divide it into six project groups, whose brief was to examine specific cases of AI use in various policy areas. The project group members discussed the current state of play, future challenges and resulting recommendations for action, documenting this in their project group reports. On the basis of these specialist and yet practice-related discussions, the members of the Study Commission then jointly identified overarching topics cutting across all areas of use. These were pooled in the general section of the report. The report concludes with a chapter on the Study Commission’s working methods. The text below briefly outlines the different parts of the report and their content and structure.

[1] They are available at https://www.bundestag.de/ausschuesse/weitere_gremien/enquete_ki (last consulted on 13 October 2020).

[2] See also chapter 10 of the general report [AI and SARS-CoV-2].

General report: overarching topics

The general report starts with the chapter entitled “Clarification of the term Artificial Intelligence”, explaining the key basic terms used in the different sections of the report. The following chapters address meta topics such as data or law. Basic principles and findings that are important for the reader’s overall understanding of the report are described and general recommendations for action are made.[3]

Artificial Intelligence and Business (project group 1)

The “AI and Business” project group commences its report with an objective stocktake of the current situation and a common objective for the year 2030. Using specific scenarios, it discusses the situation and options for action available to the three key players: start-ups, SMEs and corporations. A SWOT analysis then ascertains the current state of play in business-related research and in AI implementation in selected sectors (industry/production, commerce, finance and insurance, the agricultural economy and agriculture) and for the three players cited above. This then formed the basis for a catalogue of recommendations for action.

Artificial Intelligence and Government (project group 2)

Due to the broad scope of government use of AI, the project group report was divided into three parts, each of which was compiled by a working group (WG). WG 1 examined AI in public administration, WG 2 addressed the issues of smart cities and open data, and WG 3 discussed AI in the context of public safety, national security and IT security. The WG reports are preceded by a general section containing a comprehensive catalogue of recommendations for action that cut across the different subjects. In addition, subject-specific recommendations for action are listed at the end of the relevant chapter in each WG report.

Artificial Intelligence and Health (project group 3)

The report by the “AI and Health” project group starts with an overview of examples of specific areas of use (such as early diagnosis, care and monitoring, personalised therapies, nursing), followed by a SWOT analysis for Germany. This is followed by an overview of AI-specific fields of action (in particular digitisation and data availability, Germany as a centre of research and business, liability and approval, and intelligent assistance systems, for instance in nursing care). For each of the fields of action, specific recommendations are made, which are summarised in the introduction in the form of ten selected recommendations for action.

Artificial Intelligence and Work, Education and Research (project group 4)

This project group examined, first, the use cases and impacts of AI on the world of work; second, how AI can be used in education and continued education and training, and in which fields of education instruction and continued training should be provided on the subject of AI; and finally, which research fields are relevant to AI. The report looks at use cases to study where AI is being tried and tested in business and administrative settings, and how AI is already being used. Similarly, it cites examples of where AI can be or is already being used in schools and universities and in research. The use cases are shored up with a vision for the year 2030 of what the world of work, education and research of tomorrow might look like, as well as an examination of the drivers of and brakes on this development. Following this overview, the main challenges in all areas are identified and corresponding recommendations for action developed.

Artificial Intelligence and Mobility (project group 5)

In addition to its executive summary, preliminary remarks and introduction, the report by this project group consists of a number of thematic focal points. First, it discusses AI-based visions of the future of mobility as well as intermodality and platforms. It then studies road, rail, air and water transport in terms of the use of AI, and finally analyses the meta issues of the economy, competition and urban development. Each of the resulting chapters on each thematic focus contains its own recommendations for action, addressing both passenger and freight transport.

Artificial Intelligence and the Media (project group 6)

The “AI and the Media” project group took into account the multi-faceted nature of the media. The report first addresses the links between AI and the media in a broad sense. These sections examine both the perspective of the users/consumers of media and that of the providers/the market. The report studies both information and entertainment media. In addition to this, in the scope of its market analysis the report takes a comprehensive look at the platform markets. Second, the report explores specific issues such as deep fakes, recommendation systems, automated journalism, social bots and political microtargeting in depth. Third, the report puts the spotlight on media regulation, dealing with AI relevance in the context of hate speech or upload filters in the context of large platforms. Each of the sections, which adopted different approaches, is rounded off by specific recommendations for action taking into account the diversity of the media and AI linkages.

Brief and background of the Study Commission

The chapter “The Study Commission’s Brief and Working Methods” provides an overview of the background, composition and work of the Study Commission. The list of external experts is provided in annex 2.4.11 [guests invited to Study Commission consultations] and in annex 2.4.12 [guests invited to project group consultations].

Summary and recommendations for action (selection)

Some aspects were omnipresent in the work of the Study Commission. A selection is presented below.

AI’s potential to change our society

AI is the next stage of digitisation driven by technological progress. Its potential to bring about far-reaching changes in many areas of life and society is evident in the analyses of the status quo in the project group reports (see chapter 3.1 of the report by the “AI and Work, Education and Research” project group [Basic Principles and Stocktake of the Current Situation], chapter 2.1 of the report by the “AI and Government” project group [Introduction], chapter 4.4.2 of the report by the “AI and Health” project group [Status Quo of AI Applications in Nursing Care], chapter 4.1 of the report by the “AI and Business” project group [Status Quo of AI in Business], chapter 4.1 of the report by the “AI and Mobility” project group [Future of Mobility], chapter 3.2 of the report by the “AI and the Media” project group [Introduction to the Technical Foundations]). The change in values that goes hand in hand with technological change is not bad per se; changing values are part and parcel of the development of humankind and society. This means that technological development needs to be shaped democratically, on the basis of an agreement on a good and just way of living now and for future generations (see chapter 6.1 of the general report [Aims and Objectives of AI Ethics]). The Study Commission identified a need for society to reflect on the impact of AI systems, outlined the direct impacts on society of using AI systems and the discourses on them, and explored the possibilities of sustainable and prosperity-oriented policymaking to shape the opportunities and impacts of AI systems (see chapter 7 of the general report [AI and Society]).

Humans front and centre

In its debates, the Study Commission was guided by the model of human-centred AI. This means that AI applications should be geared first and foremost towards human well-being and dignity and should bring benefits to society. Here, it should be borne in mind that the use of AI systems must preserve, and possibly even bolster, people’s control over how they live and act and their freedom of decision. The Study Commission is confident that this premise enables the positive potential of AI applications to be fully harnessed and the confidence and trust of users in the use of AI systems to be best developed and strengthened. This trust is the fundamental key to the societal acceptance and economic success of this technology. And this success, in turn, is the key to establishing “AI made in Europe”, to ensuring a future-proof economy and to ensuring that our society is not shaped by AI with different underlying fundamental values.

New technology highlights and sometimes reinforces the need for action

AI systems sometimes make a need for action in existing societal, economic and government tasks more visible or even reinforce it. This includes areas such as educational and gender justice, combating racism and other forms of discrimination, and overseeing ecological and economic structural change. The Study Commission’s debates repeatedly highlighted that AI systems are a powerful tool, but ultimately just a tool. Parliament and the Government still need to find political solutions to societal challenges; AI can then be harnessed for implementation. Sometimes, though, AI can also open up new approaches to challenges society faces. It is worth noting that even the discussion about AI itself is leading business, workers and policymakers alike to look closely not just at the technological aspects of AI, but also at issues such as distributive justice and ways to design fair digital markets (see chapter 4.1.3 of the report by the “AI and Business” project group [Current State of the Market]).

A common European AI strategy

A strong, recurring element in the Study Commission’s discussions was thinking about a future AI strategy in European terms. AI development hinges on the cooperation of different players in the fields of research, development and application. On its own, Germany has little chance of shaping the development of AI systems to meet the aforementioned objectives. So what is needed is a European understanding in order to be able to design AI applications in line with European ideas.

This was also reflected in many central recommendations for action, which recommend a European dimension with regard to digital infrastructure (see chapter 9.2 of the general report [Guidelines]) and an accelerated expansion of capacities throughout Europe and Germany, for instance in cloud computing and network buildout. Securing technological sovereignty (see chapter 5.1.3 of the report by the “AI and Business” project group [Technological Sovereignty]), a joint research strategy (see chapter 9.5 of the general report [Central Recommendations for Action for Government]), a data policy rooted in European values (see chapter 2.6 of the general report [Political Framework for Action on AI and Data]) and uniform regulation throughout Europe (see chapter 4.4 of the general report [AI-specific Risk Management]) have also all been called for.

Interdisciplinarity unlocks potential

An interdisciplinary dialogue between the different players and society is necessary to unlock the potential surrounding AI, to identify possible risks early on and to duly reflect the complexity of this subject. This means initial and continuing education and training on AI will have to be broad-based to facilitate this interdisciplinary dialogue. Education and information campaigns will also help address fears and preferences relating to AI-driven societal development at an early stage and paint a more realistic picture of the opportunities and risks of using AI systems.

Likewise, technical interdisciplinarity is key to successful AI innovation in Germany: AI software, AI hardware and AI use must be considered together, as only together can an energy-efficient solution be achieved, the safety (robustness, reliability) of the overall solution (for instance for autonomous means of transport) be ensured or, in the case of the commercial use of an AI solution, the costs be compared.

Foster standardisation

Standardisation and certification processes are tried and tested means in many sectors of the economy to foster exchanges between companies and to establish products and services on the market quickly and easily. They also often make it possible to dovetail technologies across different sectors. There are therefore high expectations that standardisation and certification will help propel companies to success in the AI sector. The Study Commission sees a need for adjustment here, inter alia in regulations or standards issued for introducing AI into industrial processes and products.

Innovation and experimental spaces

Experimental spaces, also known as sandboxes, are a frequently cited way to push AI innovation forward. Experimental spaces are needed in order to be able to safely test and further develop AI technologies in real environments. This also supports research findings being swiftly translated into applications as is often called for. Particularly in the business, mobility and health project groups, but also in the chapter on research, experimental spaces were mentioned as an effective method. Lawmakers need to shore this up by defining the legal framework and supporting the designation of experimental spaces.

Digital infrastructure is a prerequisite for using AI

A high-performing digital infrastructure in public administration (see the report by the “AI and Government” project group in chapter C III [Artificial Intelligence and Government (project group 2)]), in the health sector (see the report by the “AI and Health” project group in chapter C IV [Artificial Intelligence and Health (project group 3)]), in educational establishments and nationwide is a must for AI to be used in various different sectors. Here, the Federal Government and the federal states must work together even more closely to close existing gaps in the supply of broadband, but also in hardware and software in public institutions.

The following chapters quote selected recommendations for action from the overall report in abbreviated form. The aim of this list is to help readers identify and find central recommendations for action.

1 Data

Data plays a central role for AI systems in use and testing, but above all in training. In reflection of this, many sections of the report contain recommendations for action to improve how data is handled. The sample recommendations for action listed here address better availability through trust centres, higher interoperability thanks to the use of standards, promoting open data and more precise data protection provisions.

2 Data availability

Additional policy measures can improve data availability outside of government and administration, too. In science and academia, for instance, there are often insufficient resources to make data collated in research projects more widely available. The exchange or shared use of data between companies entails legal uncertainty, especially in relation to antitrust law. Here, there is a need for action.

[3] For this chapter, there is a dissenting opinion from the CDU/CSU parliamentary group.

[4] Dissenting opinion on chapter 1 of the executive summary of the report (“Data”) as well as chapter 5.7 of the general report (“AI and Law: Recommendations for Action”) by the expert member Dr Sebastian Wieczorek and Members of the Bundestag Marc Biadacz, Hansjörg Durz, Ronja Kemmer, Jan Metzler, Stefan Sauer, Professor Claudia Schmidtke, Andreas Steier and Nadine Schön, and the expert members Susanne Dehmel, Professor Wolfgang Ecker, Professor Alexander Filipović, Professor Antonio Krüger and Professor Jörg Müller-Lietzkow.

5 Use of data

The [Study Commission] has the expectation that the amount of available training data could increase by propagating trust-building concepts for the anonymisation and pseudonymisation of data. It therefore recommends putting in place trust structures for the interdisciplinary, trustworthy sharing of non-personal data.

6 Data release

The [Study Commission] recommends enabling phased, voluntary and revocable data release in close consultation with the data protection supervisory authorities, using coordinated, interoperable and, where possible, open standards […], putting in place a national health care register or a group of registers and the associated decentralised registers and swiftly harmonising data protection legislation for the field of health on the basis of the General Data Protection Regulation (GDPR).

7 Networked data infrastructure

Dependence on providers based outside of the EU can only be curbed by developing or strengthening our own expertise. Public administration has an important lever at its disposal here in the form of public procurement. Furthermore, the skills of European companies in this field should be bolstered. With the GAIA-X initiative, the Federal Government has launched a European initiative to set up a networked data infrastructure. In the area of research, the development of a National Research Data Infrastructure is designed to connect and strengthen expertise in the management of research data. When putting infrastructure in place, the sustainable use of resources must be heeded.

8 Data standards promote interoperability

Data standards foster the cross-organisational use of data and support broad application possibilities of and interoperability between AI systems. Standards also facilitate the merging of data sets from different sources. The Study Commission therefore recommends linking decentralised data sets, for instance in value chains, research networks and public administrations more interoperably. This should entail supporting flagship initiatives to network and connect decentralised data, such as International Data Spaces, the aforementioned National Research Data Infrastructure or the Open Knowledge Foundation, by appropriate underlying legal conditions and targeted funding.

9 Further develop open data legislation

Further developing the extremely varied open data legislation at federal, federal state and European level is also central to the development of a data policy. It must stress the protection of fundamental rights and be positioned as an alternative both to data models driven by state security and control interests, as in China, and to models heavily influenced by the interests of large Internet platforms and the tech industry, as in the US.

10 Data protection

The balance struck by the GDPR between data protection and innovation should be preserved. Legal uncertainties that persist in the interpretation of the GDPR rules with regard to the functioning of AI systems need to be clarified. This should be done in part by further specifying the applicable rules through the regulated self-regulation provided for in the GDPR, that is, in the form of codes of conduct and certification. The voluntary commitments should be evaluated after five years and, if need be, replaced by appropriate legal provisions. Second, problems identified during the GDPR evaluation should be eliminated by way of clarification. This is without prejudice to the fundamental principles of the GDPR. […] To date, attempts to link anonymised data to individuals do not constitute a criminal offence. It should be examined whether and to what extent it would make sense to criminalise the intentional de-anonymisation of data.

11 Research

In many sub-areas of AI, research in Germany enjoys an outstanding reputation internationally. Europe as a whole is on a par with the US and China, depending on the data available. Germany has a lot of catching up to do in the field of cutting-edge research, both in terms of its remuneration system and research conditions, and in terms of attracting foreign researchers and keeping researchers here. Leading German research institutions are not very visible by international comparison. Targeted additional investments could enable Germany to set its own priorities, building on existing strengths and developing selected core topics of relevance to society as a whole in particular (see chapter 9.1 [Introduction and Overview], chapter 9.4.1 [What are the Strengths of AI Research in Germany?], chapter 9.4.2 [What are the Problems of AI Research in Germany?], chapter 9.4.3 [What Potential can be Harnessed?] and chapter 9.5 [Central Recommendations for Action for Government] of the general report).

Values

Societal values, human well-being and the acquisition of knowledge must take centre stage in the endeavours by science and research. The findings and the applications based on them should be sustainable, trustworthy and mindful of resources.

12 Funding

To have a say in shaping AI, Germany, in concert with other European states, must invest far more resources in research on AI. This will also make it possible to ensure technological sovereignty. Here, it is not just national flagship projects that are important and needed; European efforts to establish centres based on broad research and industry networks also require backing. This also entails making Germany more attractive as a place to conduct research for international researchers. Foundational AI research on algorithms, systems, hardware and software also needs to be expanded and permanently embedded in universities and research institutions. Emerging fields, that is to say fields harbouring high potential in terms of development and success, already need to be established and heavily promoted now.

13 Transfer

Cooperation between research, business and industry, and society is essential in order to transfer technologies out of the realm of research and onto the market and into society. A central issue here is providing the data and technologies that research needs. To enable this transfer, processes should be simplified at universities and research institutions and special rules should be developed for collaboration with start-ups. To ensure society as a whole benefits from the progress made in AI research, a high-performing, nationwide research infrastructure needs to be put in place and interconnected.

14 Research topics

The opportunity and challenge for research funding in the field of AI consists of identifying medium to long-term topics of major strategic, economic and societal importance in the areas of foundational research and applications. Alongside the bases of AI algorithms and AI systems, these include above all the energy supply, industrial manufacturing, transport and logistics, smart cities, e-democracy and societal discourse, education and continuing education and training, social inclusion by means of assistance and communication systems, security and defence, diagnostics and, overall, improving prevention, intervention and care in the health sector. The mechanisms and impacts of algorithmically personalised messages, microtargeting, filter bubbles and hate speech also need to be researched.

15 Sustainability thanks to AI and sustainable AI

Sustainability in its holistic sense was a subject in almost all of the Study Commission’s project groups. Various aspects of the social, economic and ecological dimensions of sustainability were also described in the general report (see chapter 7.3 [Developing and Using AI Systems for Sustainability and Prosperity] and chapter 8 [AI and Ecological Sustainability] of the general report).

AI systems have the potential to contribute to the sustainable development of mobility (see chapter 4.1 of the report by the “AI and Mobility” project group [Future of Mobility]), to a more efficient use of resources and to a successful energy transition (see chapter 8.3 of the general report [AI’s Potential for Advancing the Energy Transition]), in turn supporting the attainment of the climate goals. The Study Commission advocates AI systems also being used in a targeted manner to support societal progress – for instance for less discrimination, more equal opportunities, better working conditions and attaining the UN Sustainable Development Goals (SDGs).

At the same time, it is important to remember that using AI solutions is not per se economically, ecologically and socially sustainable. Here, an environment with clear conditions must be established that fosters sustainable innovation (see chapter 8.6 of the general report [Conclusion]).

Sustainable and prosperity-oriented use of AI

AI harbours wide-ranging potential to solve pressing future problems, from climate change to demographic change. Whether this potential can be tapped depends largely on whether these approaches are deliberately promoted at the level of research and economic development funding, particularly in fields that are not yet market-ready.

16 Sustainable AI as a brand

It is recommended that the (market) potential of a “Sustainable AI” brand (see chapter 1 of the report by the “AI and Business” project group [Executive Summary of the Project Group Report]), that is, AI applications that are optimised in terms of energy and resource use and their efficiency potential when in use, be a key consideration in further developing the AI Strategy. This ties in with a recommendation for more research on the systematic analysis of the potential to save on CO2 harboured by AI applications in the key sectors of energy, industry, agriculture, housing and mobility. This should take into account sufficiency issues.

17 Improve the data base on energy consumption and sustainable IT

It is recommended that the data base on AI applications’ contribution to the energy consumption trend, both in terms of positive and negative impacts, be improved. The Study Commission also recommends more funding for sustainable IT as an infrastructural pre-condition for reducing AI’s ecological footprint.

18 Business and work

The disruptive nature of AI technologies enables not just totally new products, but also novel business models. New competitors will come onto the scene to challenge established companies, but there will also be opportunities for new business. The failure to rapidly scale up ideas and pilots into effective large-scale projects and players, the lagging expansion of nationwide digital infrastructure and the absence of technological sovereignty, for instance when it comes to the development of computing power (including hardware and quantum computing), cloud structures or data pooling, were identified as key problems in asserting German and European approaches in the field of AI. Recommendations for action addressing these issues are included in the report by the “AI and Business” project group in chapter C II [Artificial Intelligence and Business (project group 1)].

AI also makes new forms of automation possible, which on the one hand enable monotonous, dangerous or strenuous activities to be performed by machines, but on the other hand also eliminate jobs and create new ones with new demands and requirements. AI also enables new personnel management methods.

19 Systematic monitoring of AI

For law and policymaking to steer the important topic of AI effectively and strategically, a sound analysis of strengths and weaknesses and realistic technical and economic expectations are needed. The Study Commission therefore suggests compiling a valid, differentiating data base on the economic impacts of the use of AI for Germany (and Europe) as a foundation for decision-making. Furthermore, a dynamic goal and monitoring system should be designed that supports a central control structure for AI with the power to issue instructions. To better prepare and shape structural change, evidence-based research and reliable projections of the economic and employment impacts of using AI are indispensable. In addition to the activities of the AI Observatory, special funding programmes need to be set up to systematically record and analyse the impacts of AI that have a bearing on the labour market.

20 Start-ups driving the AI transformation

Start-ups are seen as a major driving force behind the AI transformation, leading to various recommendations to bolster an AI start-up ecosystem. These include measures such as funds and funding opportunities in the growth phase of fledgling companies provided by the EU, the Federal Government and the federal states, and proposals for improving the translation of current research into new business models through spin-off processes and research spin-offs. Awarding more public administration contracts to German start-ups is seen not only as a way to strengthen the start-up ecosystem, but also to enable more collaboration between AI start-ups and SMEs. This requires barriers to participating in public procurement processes being lowered further and these processes being made more start-up-friendly, for example by reducing red tape further, through quick award decisions and award procedures that promote innovation, based on the “competitive dialogue” and “innovation partnerships” under European public procurement law.

21 Incentives for SMEs/economic development funding

When it comes to SMEs, advice and concrete support services for technology scouting and transfer provided through the SME 4.0 competence centres, AI trainers and specific skills development measures should be intensified. The creation of data pools, for instance in the form of interdisciplinary data cooperatives, and the continued promotion of regional clusters and hubs appear key. Furthermore, greater incentives should be created for SMEs and ways to share non-personal or anonymised data securely and jointly with other companies and organisations need to be demonstrated so as to generate added value for all those involved, for instance through trust centres for data sharing or by creating interdisciplinary data cooperatives […]. This will allow concentration effects and monopolisation tendencies in the data economy to be curbed, which give major international players (especially GAFAM) a competitive edge in the AI market thanks to their extensive data stocks and data expertise.
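One way companies could pool data while keeping records joinable without exposing direct identifiers, in the spirit of the trust centres and data cooperatives mentioned above, is keyed (salted) hashing of identifiers. The sketch below is illustrative only; the field names and the trust-centre role are assumptions, not mechanisms prescribed by the report.

```python
import hashlib
import hmac

def pseudonymise(record, id_field, secret_key):
    """Replace a direct identifier with a keyed hash before the record is
    contributed to a shared data pool. Only the holder of secret_key
    (e.g. a neutral trust centre) could re-link the pseudonyms; the
    companies contributing data cannot reverse them."""
    out = dict(record)
    token = hmac.new(secret_key, record[id_field].encode(), hashlib.sha256)
    out[id_field] = token.hexdigest()
    return out

# Illustrative: two companies pseudonymise the same customer using the
# same key held by a hypothetical trust centre, so their contributions
# to the pool remain joinable on the pseudonym.
key = b"held-by-trust-centre"
a = pseudonymise({"customer": "acme-001", "orders": 12}, "customer", key)
b = pseudonymise({"customer": "acme-001", "returns": 1}, "customer", key)
assert a["customer"] == b["customer"] and a["customer"] != "acme-001"
```

Note that pseudonymised data is generally still personal data under the GDPR; this sketch addresses joinability, not full anonymisation.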

22 AI Moonshot projects

AI harbours wide-ranging potential to solve pressing problems of the future. Whether this potential can be tapped hinges crucially on whether such approaches are singled out for research and business development funding, especially in fields that are not yet market-ready or whose use is not yet rewarded by competition incentives. As an instrument for this, the Study Commission proposes funding and implementing “AI Moonshot Projects” that are beneficial to society.

23 Promote the transfer from research to practice

AI is more than just a technology; the changes it engenders are already disrupting some economic sectors and markets, and in other areas changes are highly probable. […] Policymakers and government must help shape this transformation. The Study Commission recommends expanding advice for companies on the transformation of their own business processes and models and the sharing of best practices further […], merging existing decentralised AI resources on a platform under neutral, non-commercial leadership and with political support, and putting in place “regulatory sandboxes” […] or free experimental spaces which researchers can use to conduct real-life experiments under suitable conditions.

24 Use AI to secure decent work

To nurture the potential for emancipation, sustainability and decent work, work design needs to be guided by special principles. These principles should minimise the risks AI poses to employees: the downgrading of their skills, threats to their personal rights and to their ability to secure work in the future, unjustified control and scrutiny, disempowerment, work intensification and job losses. It makes sense to gear the influence of lawmakers and other standard-setters inter alia towards the following aims: the potential of AI for increasing productivity and improving the well-being of the working population should be leveraged to develop and promote new business models which help secure and expand employment, to develop “decent work by design” and to first and foremost transfer monotonous or dangerous tasks to machines, […] and to ensure that as social beings humans have the opportunity to interact socially with other humans at their place of work, receive human feedback and see themselves as part of a workforce.

25 Modernise co-determination

Acceptance among employees and the successful implementation of AI hinge significantly on early information and involvement. To preserve the opportunities for employees to influence the protection of their personal rights, to avoid excessive strain, to cope with the transformation of their place of work and to design employment conditions, co-determination needs to be updated taking into account technological developments and evolving the previous balance struck between employee rights and property rights. In reflection of the process characteristics of learning machines and in order to have a forward-looking, effective and rapid impact, co-determination at plant level needs to be geared towards the concept of developing, using and further developing systems. It also needs to be able to address the normative effect of all major issues relating to personal rights and effectively influence the amount of work, organisation of work and requisite skills development arising in connection with the use of AI systems.

26 Conditions for AI use in the field of human resources

When using AI applications, it needs to be ensured that humans continue to decide on personnel matters. When it comes to managing human resources, it must be ensured that no data may be collected and used for automated programmes or AI solutions that are no longer under the deliberate control of the people concerned.

27 Further develop social security systems

The increasing prevalence of AI systems in business and society makes the debate on the further development of social security systems already under way all the more important. The recommendation is to establish an Expert Commission on this issue during the German Bundestag’s next electoral term. Taking empirical research findings as the basis, it should be reviewed whether and to what extent suitable criteria and provisions can be created for designating vulnerable workers at platform companies as requiring coverage under social security law.

28 Skills, education, empowerment

Nearly all project groups formulated recommendations on the investments needed to build up AI expertise and skills. These recommendations relate to all facets of the education sector, with a special emphasis on ensuring the requisite foundation is laid for AI (especially in the MINT subjects and soft skills), the general development of AI skills starting at school – for girls and boys equally – and in continuing professional development. At school, it should also be reviewed whether and how AI can be used to assist teaching. It is also a matter of measures which enable society to deal with AI in an empowered way.

Expand education policy to include AI-specific issues

Another key field is education policy. Here, government is called upon to initiate comprehensive measures starting in school that promote and foster education in the field of AI, especially in the MINT subjects, but also in the sense of overarching, interdisciplinary education, so that enough young people are able to fully take advantage of the courses offered at universities. Only then will it be possible in the medium to long term to train a sufficiently large number of AI specialists at universities, who are needed in all areas, and make them available for research as well as for applications in business and industry and government.

29 Explore the use of AI systems in the classroom further

To use AI in learning processes in an educationally meaningful way, even more research should be done on how AI systems impact learners and teachers and how they can support them in achieving educational goals (inter alia inclusion). When introducing AI systems and the data infrastructure this entails, media-education process support should be provided.

30 Promote diversity

Imbalances that exist between girls and boys or women and men in terms of their knowledge and use of AI should be redressed. This can entail both schools and universities developing programmes and courses which encourage girls’ and young women’s interest in information technology and AI and give them the opportunity to get involved. During their training, teachers should be sensitised towards this. Universities should examine the possibilities of specific programmes for girls and boys within computer science courses. The general public’s knowledge of AI should be expanded in an inclusive way, that is to say reflecting both the heterogeneity of society and the different areas of use.

31 Create initial and continuing education and training programmes on AI

In the field of initial and continuing education and training, education programmes need to be put in place that promote the workforce’s AI skills and expertise. These training courses should comply with uniform standards. […] Boosting continuing professional development at companies is key to enabling lifelong learning, which AI is making increasingly important. The mismatch problem, that is, the simultaneous occurrence of job losses and a shortage of skilled workers on the labour market, can only be tackled by tangibly expanding a functioning knowledge infrastructure. Continuing professional development is a task for education policy and it must be accessible to everyone.

32 Educate and inform people about the use of AI

To secure broad acceptance of AI systems, people need to be prepared as well as possible for the social upheavals ahead (both positive and negative) stemming from the use of AI systems, through opinion-forming, empowerment, transparency, participation and protection. An important field of action is fostering understanding and awareness of the opportunities offered by AI systems, of one’s own skills, and of how these systems work and what impact they have.

33 Create a publicly available continuing education and training platform for AI

To empower the general public to understand fundamental interrelationships in the field of AI and how it works, a continuing education and training platform should be developed. […] Here, attention should be paid to ensuring that a government continuing education and training platform is not limited to just pooling different offers and courses, but that access is low-threshold.

34 Study the impact of AI recommendations on decision-making autonomy

It is unclear what influence recommendations by AI systems have on final decisions by humans. For instance, it is questionable whether and to what extent administration employees contradict AI recommendations in their everyday work and in turn help avoid mistakes being made. This gives rise to a need to study the sociological and psychological impacts of AI recommendations on humans and their decision-making autonomy. AI systems should always be designed in such a way that they do not run contrary to the autonomy of the individual. There is a clear and interdisciplinary need for research here, which is why studies on this issue need to be actively promoted.

35 People and society

AI-based systems are already impacting the behaviour and knowledge of individuals in many areas of society today and so in turn are also a factor impacting collective behaviour. Examples are vehicle navigation and the content displayed or recommended on social networks and video portals. The Study Commission discussed the design processes and design of such systems in many contexts. Recommendations for action in the areas of mobility and the media are listed below, with a special focus on the issues of freedom and diversity of opinion, non-discrimination, transparency and traceability.

Holistic view of mobility

The mobility of the future and in turn AI applications in mobility have to be viewed holistically. […] This entails combining innovative and expedient endeavours in a holistic approach, to in turn advance AI for the entire mobility sector. This requires greater interconnectedness and networking in transport planning, research and development and the legal framework in Germany and Europe alike.

37 Preserve media diversity

The potency or leverage effect of using AI in recommendation systems is evident and strengthens intermediaries in the media markets in particular, even if they do not offer their own media content. […] If media diversity is to be preserved, a useful instrument from this perspective, in addition to the application of antitrust law, remains the introduction of a digital tax on the AI-based services of platform and social media providers, which secure them a disproportionate share of advertising markets.

38 Limit political microtargeting

Similar to personalised targeting offline (for instance in election advertising sent by mail), there should be limits to what data on personal behaviour is allowed to be used for political microtargeting. This limitation should apply to both targeting (by advertisers) and display (by the platforms’ AI). Legal rules should replace the voluntary measures of some commercial platforms here.

39 No upload filters for the time being

As far as possible, the uncontrolled use of upload filters should not be permitted when it comes to assessments that depend on context or that are not trivial in legal terms. This does not preclude using AI-based filtering systems for pre-sorting prior to review by a human. Against this background, it seems advisable that systems currently in use be improved and their use be subject to regulatory oversight, whereby an automation of law enforcement should definitely be avoided. Automated erasure or non-publication should be limited to cases where the dissemination of specific content has to be prevented regardless of any and every conceivable context.
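The pre-sorting the commission permits (AI filtering before human review, with automated blocking reserved for content that is illegal regardless of context) can be sketched as a simple score-routing step. The thresholds and the classifier below are illustrative assumptions, not values from the report.

```python
def route_uploads(items, classify, block_threshold=0.99, review_threshold=0.5):
    """Pre-sort uploads by classifier score instead of erasing automatically.
    Only near-certain matches are blocked without a human in the loop;
    everything ambiguous is queued for human review."""
    blocked, human_review, published = [], [], []
    for item in items:
        score = classify(item)  # assumed probability that the item is illegal
        if score >= block_threshold:
            blocked.append(item)          # context-independent, clear-cut case
        elif score >= review_threshold:
            human_review.append(item)     # context-dependent: a human decides
        else:
            published.append(item)
    return blocked, human_review, published

# Hypothetical classifier: here just a lookup of precomputed scores.
scores = {"clip_a": 0.999, "clip_b": 0.7, "clip_c": 0.1}
blocked, review, ok = route_uploads(scores, scores.get)
```

The design point is that the automated path only narrows the human workload; it never replaces the human decision in legally non-trivial cases.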

40 Research transfer in the detection of discrimination

There has been a great deal of research in recent years on detecting and preventing discrimination in AI systems. The next step, transferring these findings to everyday software development, should be promoted so that the findings can be translated into practice as swiftly and broadly as possible and overseen by researchers.

41 Review AI-based decisions regularly to verify non-discrimination

It must be ensured that AI systems developed and used by government are non-discriminatory […]. There must be reviews of whether the data in the algorithmic decision-making system is used in a field of application where fundamental rights have to be protected and where equal treatment is especially important (for instance access to social benefits). If so, the result of the machine decision and […] that of the final human decision must be regularly examined to determine whether the decision is discriminatory.
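The regular examination called for here can take the form of a simple fairness audit over logged decisions, for example comparing approval rates across groups. The "four-fifths" threshold used below is a common rule of thumb from fairness auditing, not something the report prescribes, and the data is made up.

```python
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs logged by a
    decision system, e.g. access to a social benefit."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Values well below 1.0 (a common rule of thumb is 0.8)
    flag the system for closer, e.g. legal, examination."""
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Illustrative audit of logged decisions (synthetic data):
log = ([("A", True)] * 50 + [("A", False)] * 50
       + [("B", True)] * 30 + [("B", False)] * 70)
ratio = disparate_impact(log, protected="B", reference="A")
# 0.30 / 0.50 = 0.6, below the 0.8 rule of thumb, so this system
# would warrant the closer review the commission recommends.
```

Such a metric is only a trigger for review; whether a disparity is in fact discriminatory remains the human, legal judgement the text requires.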

42 Make AI use transparent

Rules governing the use of AI therefore need to be developed that reflect the diversity of society and, where appropriate, involve those affected. Depending on how critical the context is, citizens must be informed of the use of AI and generally educated in how to deal with AI. […] Wherever people are affected by the consequences of a decision based on an AI system, they must receive sufficient information to be able to exercise their rights adequately and, potentially, question the decision.

43 Regulation and government

As a body established by lawmakers, the Study Commission repeatedly examined regulatory issues relating to AI. The Basic Law of the Federal Republic of Germany and the Charter of Fundamental Rights of the European Union, with the concept of human dignity as the yardstick for all policymaking, form the broader framework for shaping AI. As evidenced by the recommendations for action cited here, the issues addressed included the definition of principles, questions of proportionality, the need for risk-specific and sector-specific regulation and liability questions. The general and ex ante categorisation of AI systems into risk classes, as recommended by the Data Ethics Commission, was controversial in the Study Commission.

Build trust through a trustworthy AI

Trust is an important factor for the success of AI. This is why, when using AI systems, sufficient traceability and transparency have to be ensured for consumers and employees alike. Concerns voiced by the public should be actively addressed and allayed through suitable information campaigns, protection mechanisms and requirements. Here it is important to strike the right balance between consumers’ and businesses’ interests – measures must be transparent and practicable for both sides so as not to be an obstacle to innovation.

44 Ensure proportionality

When assessing the use of AI systems in the field of public safety, alongside the cost-benefit ratio, the proportionality of measures should be verified. Here, the fundamental rights of those concerned must be carefully weighed up.

45 Sector-specific regulation

Existing sector-specific regulatory regimes should be reviewed and expanded to include AI-specific requirements where the use of AI in the specific use case gives rise to additional risks. […] The supervision and enforcement of rules should primarily be the role of sectoral supervisory authorities that have already developed sector-specific expertise.

46 Liability

In the view of the Study Commission, the existing liability system is fundamentally suited to ensuring compensation for damages caused by AI systems as well. It does not currently see any urgent need to put in place new liability provisions specifically for AI systems. When standardising AI systems, however, particular attention should be paid to ensuring that processes in AI systems are traceable and thus demonstrable.

47 Government as a service provider

AI systems should make life easier both for citizens, when it comes to obtaining information and lodging applications, and for administrative staff, when it comes to processing them. AI systems should help make it possible to extend the services offered so that they include a round-the-clock, multilingual, barrier-free and free-of-charge range of services. AI systems can improve accessibility and fulfil people’s right to participate. AI systems should be used with a focus on lowering bureaucratic hurdles, which can in turn fundamentally simplify access to information and the entire application process.

48 International ban on lethal autonomous weapon systems

At international level, the Federal Government must continue to advocate and work towards a worldwide ban on lethal autonomous weapon systems, adopting a path that allows the largest possible group of states to be involved.

49   See also chapter 1 of the report by the “AI and Business” project group [Executive Summary of the Project Group Report].

45   See also chapter 3.1 of the report by the “AI and Government” project group [Public Safety].

46   See also chapter 4.5 of the general report [Recommendations for Action].

47   See also chapter 5.5 of the general report [Liability Law].

48   See also chapter 1.1 of the report by the “AI and Government” project group [Introduction].

49   See also chapter 3.2.3 of the report by the “AI and Government” project group [Recommendations for Action and Operationalisation].