Editor Statement on AI in Today's Society:
With the rise of AI from a technological perspective, the ethical issues of AI application across industry sectors and aspects of life are gaining increased attention. The expanding development of artificial intelligence raises a myriad of questions about responsible AI development (in terms of AI methods and applications) and its implementation. The prevailing notion is that AI should be accountable, explainable, transparent, and fair for all organizations and individuals.
The evolution of AI is rife with contradictions. AI has huge potential to improve human lives, yet at the same time it could widen social and digital divides. To realize its positive potential and to minimize the threats, it is critical to involve people (experts, scientists, policy makers, funders and investors, etc.) with various backgrounds, along with organizations from different industries, in building a common language and a shared understanding of AI capabilities and risks that can guide all stakeholders toward positive impacts. Broader engagement of civil society on the values that need to be embedded in AI, and on the directions for its future development, is also needed. The key to success is to balance the transformational potential of AI with human safety and privacy.
Despite the increasing adoption of AI tools and applications, some limitations remain: purely technical limitations, practical limitations, and limitations in use. Future research will need to address these general limitations. Regarding responsible AI, future research should cover the following topics: bias, transparency, and fairness in AI algorithms; the operationalization of responsible AI; and the geopolitical impacts of AI.
The benefits of AI are obvious, but there are certain societal risks related to the diffusion of AI technologies in products and services, which requires an open debate about AI governance. The main focus should be placed on developing an internationally recognised ethical and legal framework for the design, production, and application of AI. This framework should be based on common AI principles and should provide a roadmap for protecting humanity through responsible uses of AI technologies. Although AI is still at a relatively early stage of development and large-scale industrial applications are yet to emerge, the societal challenges of AI applications should be explored and prioritized, especially within the context of the AI ecosystem.
– Prof. Bistra Vassileva, University of Economics-Varna
The chapter "What we should have learned from Cybersyn" by Dietmar Koering explores the ethical considerations and societal implications of digital transformation in the context of Industry 4.0. The chapter discusses the potential impact of digitalization on employment, the concept of universal basic income, and the lessons that can be learned from the Cybersyn project. It highlights the importance of human involvement in the development and implementation of AI and machine learning systems, and it emphasizes the need for an ethical discourse on the involvement of people in the digital transformation of society. Overall, the chapter provides valuable insights and guidance for the ethical debate on the digital transformation of society.
– Dietmar Koering, Researcher