How can we democratize AI to help shape a beneficial human-centric future? Whereas a previous article, "AI's Impact on Society, Governments, and the Public Sector", among others discussed the risks, concerns, and challenges of AI for society, this ninth article in the Democratizing AI series starts by sharing some solutions to counter AI's potential negative impacts. It specifically shares some text and audio extracts from Chapter 11, "Democratizing AI to Help Shape a Beneficial Human-centric Future", in the book Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era. Democratizing AI is a multi-faceted problem that requires a strategic planning framework and careful design to ensure AI is used for social good and beneficial outcomes. Furthermore, we cannot democratize AI to benefit everyone if we do not build human-compatible, ethical, trustworthy, and beneficial AI, and address bias and discrimination in a meaningful way. Given the accelerating pace of AI-driven automation and its impact on the skills, competencies, and knowledge that the dynamic job market requires, people need to become lifelong and life-wide learners who can make proactive, smart choices about where needs and opportunities are shifting and where they can make meaningful contributions.
The following topics will also be discussed on 5 May 2022 at BiCstreet's "AI World Series" Live event (see more details at the bottom of the article):
(Previous articles in this series cover "Beneficial Outcomes for Humanity in the Smart Technology Era", "The Debates, Progress and Likely Future Paths of Artificial Intelligence", "AI's Impact on Society, Governments, and the Public Sector", "Ultra-personalized AI-enabled Education, Precision Healthcare, and Wellness", "AI Revolutionizing Personalized Engagement for Consumer Facing Businesses", "AI-powered Process and Equipment Enhancement across the Industrial World", "AI-driven Digital Transformation of the Business Enterprise", as well as "AI as Key Exponential Technology in the Smart Technology Era" as further background.)
As seen in the previous chapters, there is a strong expectancy and belief that AI and its applications can have a very positive and beneficial impact on humanity if implemented wisely, but also wariness and circumspection about its potential risks and challenges. In this section I outline some of the solutions, countermeasures, and antidotes to address some of AI's potential negative impacts and worries. I start with the Pew Research Center's Artificial Intelligence and the Future of Humans report, which prescribes three solution categories: focusing on the global good through enhanced human collaboration across borders and stakeholder groups; implementing value-based systems, which involves developing policies to assure AI will be directed at the common good and human-centricity; and prioritizing people through updating or reforming political and economic systems to ensure human-machine collaboration is done in ways that benefit people in the workplace and society more broadly.[i] These solutions also tie in with the proposed Massive Transformative Purpose (MTP) for Humanity and some specific MTP goals discussed in the previous chapter. From a global-good perspective, it is proposed that digital cooperation should be used to further humanity's needs and requirements, and that people across the globe should be better aligned and agree on how to tackle some of humanity's biggest problems through widely recognized innovative approaches while keeping control over intricate human-digital networks. This solution is also in line with MTP Goal 6, which focuses on collaborating in optimal human-centric ways to use our growing knowledge base and general-purpose technologies in a wise, value-based, and ethical manner to solve humanity's most pressing problems and create abundance for everyone.
The same holds for MTP Goal 7, with its focal point on democratizing AI and smart technology from a use and benefits perspective to help society thrive, as well as MTP Goal 9, which addresses implementing improved collective sensemaking for all of humanity and better alignment with respect to our common goals and visions. I am also a big proponent of implementing value-based systems and making sure that we have policies that help direct AI towards beneficial outcomes. The report's proposed solution of building decentralized intelligent digital networks that are inclusive, empathic, and have built-in social and ethical responsibilities is also in line with the above MTP goals as well as MTP Goal 10, which seeks to build local and virtual empathic communities connected via a global network with more meaningful work and relationships. MTP Goals 11 and 12 are also value-based, as they are fixed on helping people live more meaningful lives and improving on virtues and character strengths. The third solution category, prioritizing people by expanding their capacities and capabilities for improved human-AI collaboration, can be addressed through significant changes to our economic and political systems to better support humans. The first five MTP goals are geared towards just that through decentralized, community-based, and self-optimized governance; a more elastic and direct democracy; a compassionate human-centric society with incentives that complement and extend the evolving workplace by rewarding active participation and positive contributions to society and civilization; and a dynamically controlled form of capitalism that maximizes the benefit to all stakeholders.
As outlined in Chapter 8, McKinsey Global Institute's report on Applying AI for Social Good maps AI use cases to domains for social good and discusses how AI solutions can be used for social benefit, as well as how risks or negative impacts can be managed and difficulties handled.[ii] From a risk perspective, MGI mentions several concerns: bias that leads to unjust outcomes, such as machine learning algorithms trained on historical data that are skewed or potentially prejudiced; the difficulty of explaining the outputs of large, complex machine learning models in regulatory use cases; privacy violations involving personal information that could cause damage; and the deployment of insecure and unsafe AI applications for social good. Such risks can be alleviated by keeping people in the loop through cross-functional teams that intercede as appropriate; examining data to detect bias and determine whether there is a representation deficiency; having separate dedicated teams perform solution tests, similar to the red versus blue teams in cybersecurity; guiding users to follow specific procedures so that they do not impulsively trust AI solutions; and having AI researchers develop methods to enhance model transparency and explainability.[iii] To scale up the use of AI for social good, MGI recommends two areas that many other sources also reference: addressing the scarcity of people with AI research and application skills and experience by growing the talent pool, and making data more accessible for social-impact cases through data collection and generation projects.
MGI also provides a checklist for deploying AI solutions in the social sector, starting with the basics of clearly defining the problem; formulating the technical problem structure; alleviating the risks described above while accounting for regulatory limitations, organizational acceptance, efficient deployment, and technology accessibility; deploying AI solutions at scale with committed resources; ensuring data availability, integration, accessibility, and quality; having AI practitioners who can properly train and test AI models using sufficient computing capacity; deploying AI models in the target environment so that they deliver adequate value to drive significant adoption by the organization; and having the required technical capabilities in the organization to run and maintain AI solutions in a sustainable fashion.[iv] This checklist is fairly generic and also relevant for the development and deployment of AI solutions in the private and public sectors more broadly.
In response to the vast changes in the global threat landscape, a report called The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, by contributors from Oxford University's Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, the Electronic Frontier Foundation, OpenAI, Stanford University, the University of Cambridge, and a number of other universities and organizations, provided a general framework for AI and security threats, various threat scenarios within the digital, physical, and political security domains, a strategic analysis, and recommended interventions.[v] They specifically highlight how the security threat landscape is affected by AI systems that introduce new potential threats, broaden existing threats, and even change the typical nature of threats. Some specific high-level recommendations include much tighter collaboration between policymakers and technical researchers on understanding, preventing, and alleviating potentially ill-natured and damaging AI use cases; ensuring that stakeholders and domain experts from across the spectrum are involved in these discussions and help determine the best path forward; identifying best practices and guidelines in research areas with dual-use concerns, where smart technology can specifically be misused in computer security; and having AI practitioners and researchers carefully consider the dual-use nature of their applications and research, make sure that their research priorities and standards are not affected and directed by misapplications and harmful use cases, and proactively alert the relevant people about such potential outcomes.
In addition, they advise advancing a culture of responsibility, learning collaboratively from and with the cybersecurity community, investigating the current openness of research and publications in areas that might pose potential risk, and developing policy- and technology-driven solutions to help drive towards a safer future where privacy is safeguarded and AI is used for the common good and public-good security. Since the challenges are formidable and the consequences significant, and not only in the security risk category, we need the participation of all stakeholders in the private and public sectors to act on these types of recommendations.
A recent BBC article asks what it would take for a global totalitarian government to rise to power indefinitely, and notes that this could be a horrendous outcome, potentially "worse than extinction".[vi] Totalitarianism refers to a governmental or political system where the state has complete authority, controls public and private life, and outlaws opposition.[vii] It is a more extreme form of authoritarianism, where citizens blindly accept and comply with authority. Although a global totalitarian government still looks improbable, we already observe AI enabling a form of authoritarianism in a few countries and reinforcing infrastructure that could potentially be captured by a dictator or oppressor. So, this is a real and present danger. Apart from enhancing the surveillance of citizens, AI is also being used to spread online misinformation, propaganda, and fabricated political messages in a personalized fashion via social media. So how does one avoid these digital authoritarian scenarios? Apart from the solutions mentioned above, executing the goals linked to the proposed MTP for Humanity would clearly be a preventative step in the right direction, as the focus is on building a decentralized and community-based city-state civilization with self-optimized governance and a more elastic, dynamic, and direct democracy, which is diametrically opposed to centralized control and digital authoritarianism. Tucker Davey from the Future of Life Institute strongly recommends that we decide what are "acceptable and unacceptable uses of AI" and that we need to be "careful about letting it control so much of our infrastructure". He states that we are already on the wrong track "if we're arming police with facial recognition and the federal government is collecting all of our data".[viii]
Can we steer AI towards positive outcomes? Can we advance AI in a way that is most likely to benefit humanity as a whole and help solve some of our most pressing real-world problems? Can we shape AI to be an extension of individual human wills and as broadly and evenly distributed as possible? The answer is yes to all these questions. If society approaches AI with an open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades. Like other technologies, AI has the potential to be used for good or criminal purposes. A robust and knowledgeable debate about how best to steer AI in ways that enrich our lives and our society is an urgent and vital need. It is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive. As also discussed in previous chapters, it is likely that the future of AI will impact our everyday life through automating transportation, enhancing us with cyborg technology, taking over dangerous jobs, helping to address or potentially solve climate change, providing robots or AI agents as friends, and improving elder care.[ix] Stanford University's The One Hundred Year Study on Artificial Intelligence highlights substantial increases in the future uses of AI applications, including more self-driving cars, healthcare diagnostics and targeted treatment, and physical assistance for elder care.[x] Though quality education will likely always require active engagement by human teachers, AI promises to enhance education at all levels, especially by providing personalization at scale. AI will also increasingly enable entertainment that is more interactive, personalized, and engaging. Research should be directed toward understanding how to leverage these attributes for individuals' and society's benefit. With targeted incentives and funding priorities, AI could help address the needs of low-resource communities.
In the longer term, AI may be thought of as a radically different mechanism for wealth creation in which everyone should be entitled to a portion of the world’s AI-produced treasures. The measure of success for AI applications is the value they create for human lives. Misunderstandings about what AI is and is not could fuel opposition to technologies with the potential to benefit everyone. Poorly informed regulation that stifles innovation would be a terrible mistake. Going forward, the ease with which people use and adapt to AI applications will likewise largely determine their success. Society is now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote rather than hinder democratic values such as freedom, equality, and transparency. Machine intelligence already pervades our lives and will likely replace tasks rather than jobs in the near term and will also create new kinds of jobs. However, the new jobs that will emerge are harder to imagine in advance than the existing jobs that will likely be lost.
(For more on this, read the paperback or e-book, or listen to the audiobook or podcast – see jacquesludik.com)
We know that democratizing AI is a problem with many dimensions; it depends not only on progress in AI, smart technology, science, and policy, but also on excellent collaboration between the public sector, academic institutions, and the private sector, as well as on global organizations and governments developing policies and laws and running task forces focused on beneficial outcomes for all stakeholders and protecting the rights of citizens. A feasible solution is to democratize AI throughout its lifecycle: development, deployment, distribution, and use. When AI business strategy consultants and AI infrastructure companies refer to "democratizing AI", they mean enabling more people, including those with no background in AI, machine learning, or data science, to use the technology to innovate and build AI models or systems that solve real-world problems. We are seeing some of the tech giants such as Google, Amazon, and Microsoft provide ready-to-use AI application programming interfaces (APIs), tools, and drag-and-drop components that can be integrated into applications without having to know the details behind them or how to train and test machine learning models. An AI company such as DataRobot, with its end-to-end enterprise AI platform that automates machine learning model development and deployment, has for example a whitepaper on Democratizing AI for All: Transforming Your Operating Model to Support AI Adoption that discusses aligning AI to business objectives and drivers, identifying impactful use cases, building trust in AI, and making sure that the AI strategy can be executed as it relates to the business vision and strategy, services and customers, processes and channels, people and organization, technology and enablers, and governance and reporting.[i] These aspects were also covered in more detail in Chapter 4, where I elaborated on the AI-driven digital transformation of the business enterprise.
From an application perspective, the democratization of AI is necessary to help address the dearth of people with AI, machine learning, and data science experience, knowledge, and skills on the one hand, and on the other to drive better adoption of AI in accordance with best practices for digital transformation and to get more people in business involved in implementing AI-driven solutions. Automated machine learning solutions such as Google's AutoML and DataRobot help reduce the risk, cost, and complexity of deploying AI models in a production environment; provide transparency, documentation, and tools for understanding model accuracy; support developing many different types of models simultaneously to find the best possible model for a specific problem; enable retraining and redeployment of models; and monitor model performance.[ii]
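At its core, the automated model selection that such platforms perform can be pictured as a simple loop: fit several candidate models on the same training data, score each on held-out data, and keep the best performer. The sketch below is an illustration of that idea in plain Python, not code from AutoML or DataRobot; the candidate models and helper names are invented for the example.

```python
# Hypothetical sketch of the core AutoML selection loop (not vendor code):
# fit every candidate model, score on a validation set, keep the winner.

def train_constant(xs, ys):
    """Baseline candidate: always predict the mean of the training targets."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def train_linear(xs, ys):
    """Candidate: simple least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs) or 1e-12
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a fitted model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(candidates, train_data, valid_data):
    """Fit every candidate and return the one with the lowest validation error."""
    fitted = [(name, trainer(*train_data)) for name, trainer in candidates]
    return min(fitted, key=lambda nm: mse(nm[1], *valid_data))

train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])   # roughly y = 2x
valid = ([5, 6], [10.1, 11.8])
name, best = auto_select([("constant", train_constant),
                          ("linear", train_linear)], train, valid)
print(name)  # the linear model wins on this near-linear data
```

Real AutoML systems add hyperparameter search, cross-validation, and deployment tooling on top, but the select-by-validation-score principle is the same.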
Anand Rao, an AI lead in PwC's analytics practice in the USA, reckons that democratizing AI is a "double-edged sword" and advises that although more and better access to AI software and hardware will likely lead to more application-related innovation, one needs to manage access cautiously to avoid misuse, abuse, bias, and related problems.[iii] Having assisted many companies across multiple industries with their AI-driven digital transformation, I fully endorse this sentiment. He makes the point that AI companies providing products, tools, or platforms across the technology spectrum need to carefully decide which part of the value chain they want to democratize, and contribute to responsible, trustworthy AI with respect to design, training, testing, support, and maintenance. One can see this technology spectrum as covering data ingestion, storage, processing, and exploration; the broad range of machine learning algorithms that are already democratized and accessible via open-source repositories such as GitHub; the storage and computing platforms such as Google Compute Engine, Amazon Web Services, and Microsoft Azure, which are less democratized but do provide cloud-based environments and hardware resources for training machine learning models at scale within their own environments; the actual model development for specific solutions, where automated machine learning platforms and tools such as Google's AutoML and DataRobot, as mentioned above, can help democratize model development but where mistakes can easily creep in for non-expert users; and the development of marketplaces for data, AI models, and algorithms, such as Kaggle and Zindi, where one also needs to be careful to apply models in the correct context.[iv] Anand also emphasizes the importance of knowing the actual users of these systems, as the beneficiaries of AI democratization can predominantly be categorized into specialist
developers such as AI experts and data scientists, power users who are well-trained but not experts, and casual users, i.e., business users who do not have theoretical or practical know-how with respect to data science and its practical implementation. Once it has been determined what specifically needs to be democratized and what tools will be used, the focus can shift to how to democratize AI within an organization, which involves training, data governance, AI model or solution governance, intellectual property rights, and open-sourcing-related matters. Organizational leadership has an important responsibility to ensure that the people involved in AI development and deployment are properly trained in the foundations and practical aspects of data science and machine learning. If this is not the case, AI implementations can easily lead to basic errors, inaccuracies, and unintentional or undesired consequences. Another area that Anand highlights is data governance, which involves the thorough monitoring and management of data integrity and security to help reduce risk, as well as a clear understanding of the ownership and control of the data that flows through AI solutions and of the rights with respect to model outputs and the insights obtained. This includes ownership of intellectual property rights, which is important to ensure that the benefits of AI democratization are shared appropriately. Machine learning also needs specific governance to check for accuracy, generalization capability, fairness, and explainability. Anand also emphasizes the importance of open sourcing as a key vehicle for democratizing AI and of making sure that all participants contribute to it as far as possible rather than just benefiting in a one-way flow.
There clearly needs to be a balance between democratized innovation on the one hand and responsible trustworthy implementation with full governance, transparency, and adherence to best practices and standards on the other hand.[v]
In an essay on the importance of democratizing AI, Francois Chollet mentions that although we have seen tremendous progress with AI and deep learning research and applications in recent years, we are really only at the start of unlocking the potential of AI, figuring out the "killer apps", and seeing how AI will become the interface to our information-driven world and have a significant economic and social impact as it reshapes our scientific research, healthcare, education, transportation, communication, culture, and every part of our civilization.[vi] He compares AI as it stands now, a world-changing smart technology that increasingly automates cognitive tasks, to the Internet at the time commercial use restrictions were lifted in 1995. Initially the Internet did not have any major impact on society, but we know everything changed over the following quarter of a century. The impact was dramatic, and our civilization effectively got a "nervous system", or at the very least upgraded it in a significant way, with "instant communications, a supercomputer in your pocket, and the entire knowledge of humanity available at your fingertips". I am also in agreement with Francois's assessment that AI's impact will be even bigger and will disrupt every industry, all business models, all jobs, every application, every process in our society, even culture and art, and every aspect of our lives, and change what it means to be human.[vii] As elaborated in the previous chapters, I also believe that the evolution of AI is opening up tremendous opportunities for humanity if we can execute on them wisely. Apart from a wide range of exhilarating opportunities, it also has the potential to create abundance and affluence and to help people live more meaningful lives.
Similar to how the Internet remained open and democratized people's ability to express themselves, connect with others at will, start businesses, and leverage it for their own benefit, he reckons that we can do the same with AI, except that it is "not a given that every technological revolution should turn out as a net positive for humanity, empowering individuals and bringing us higher potential for learning and creating, for self-direction and self-actualization".[viii] We should be laser-focused on ensuring that everyone is given the opportunity to learn and unlock value from AI easily and freely, so that we maximize tapping into people's potential for positive, innovative, and creative contributions to society. Just as the internet turned out to be a net positive for humanity despite the many mistakes made and lessons learned (especially with social media), it is our responsibility to ensure that we steer the civilization ship in the right direction when it comes to smart technology such as AI. We know the potential for beneficial outcomes is enormous. Francois also mentions in a tweet that in corporate speak "centralized private control" becomes democratization, and that one does not democratize AI by building a for-profit proprietary platform; we currently see a number of for-profit proprietary platforms claiming to democratize AI.[ix] A better example of democratizing AI is Francois Chollet's Keras, a deep learning open-source library for Python that is simple, modular, and extensible and makes deep learning accessible to any person with some basic computer science skills and knowledge who wants to build deep learning neural networks.
It follows a similar approach to another very popular open-source machine learning library for Python called scikit-learn, where the latter is more focused on abstracting and making traditional machine learning techniques easy to use.[x] Keras now has many contributors and a community of hundreds of thousands of users. According to Wikipedia, Keras claimed over 375,000 individual users as of early 2020 and is used in scientific research by hundreds of researchers, thousands of graduate students, and many businesses.[xi] There is also an automated machine learning version of Keras called AutoKeras, developed by the DATA Lab at Texas A&M University, that is aimed at making machine learning accessible to everyone.[xii] Keras also runs on top of TensorFlow, Google's free and open-source software library for machine learning.[xiii] PyTorch, developed by Facebook, is another open-source machine learning framework that accelerates the path from research prototyping to production deployment.[xiv] There are also many excellent examples of democratizing AI-related knowledge, with a multitude of training courses, demos, tutorials, videos, and blogs explaining the foundations, intricacies, and practical aspects of designing, developing, and deploying AI solutions. I also agree with Francois's sentiment that "democratizing AI is the best way, maybe the only way, to make sure that the future we are creating will be a good one."[xv]
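Much of what makes libraries like scikit-learn and Keras accessible is a small, consistent interface: every model, however complex internally, is built, fitted, and queried the same way. The toy estimator below illustrates the fit/predict convention that scikit-learn popularized; the class itself is hypothetical and not part of any library.

```python
# Illustrative sketch of the fit/predict convention behind scikit-learn's
# accessibility. MeanRegressor is a made-up, trivially simple estimator;
# the point is the uniform interface, not the model.

class MeanRegressor:
    """Trivial estimator: predicts the training-set mean for any input."""

    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)   # "learned" parameter, suffixed per sklearn style
        return self                    # returning self enables method chaining

    def predict(self, X):
        return [self.mean_ for _ in X]

# The same two calls would work identically for a far more complex model.
model = MeanRegressor().fit([[1], [2], [3]], [10, 20, 30])
print(model.predict([[4], [5]]))  # [20.0, 20.0]
```

Because every estimator shares this shape, a user who has learned one model can use them all, which is precisely the democratizing effect described above.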
Machine Learning for Machine Learning, or MLsquare, is an open-source initiative that focuses on addressing machine learning's limitations using machine learning itself, and that serves as a space for machine learning enthusiasts to collaborate and find solutions to such limitations.[xvi] This initiative from a team in India has also recently published a framework for democratizing AI and shared an extensible Python framework that provides a single point of interface to a range of solutions across desirable AI attribute categories, to make AI responsible and responsive. These attributes include portability, by separating the development of machine learning models from their consumption; explainability, by providing plausible explanations along with predictions; credibility, by providing coverage intervals and other uncertainty quantification metrics; fairness, by making machine learning bias-free and equitable; decentralization and distributability, by deploying models where the data is instead of the reverse; being declarative, by specifying what the model requires and what it should do without being concerned about its actual workings; and reproducibility, by reproducing any result in an on-demand fashion.[xvii] With respect to achieving portability, as an example, one can go the route of having an intermediate representation of the machine learning models, such as the Predictive Model Markup Language (PMML) or the Open Neural Network Exchange (ONNX).[xviii] ONNX in particular is a format supported by deep learning frameworks such as TensorFlow, PyTorch, and MXNet, as well as by WinML, with converters that allow saving scikit-learn and XGBoost models in the ONNX format. The mlsquare team has presented the design details, the APIs of the framework, reference implementations, a roadmap for development, and guidelines for contributions.
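The intermediate-representation idea behind ONNX and PMML can be shown in miniature: the producer serializes a trained model into a framework-neutral description, and a separate consumer reconstructs and runs it without the original training code. The JSON schema below is invented purely for illustration; real ONNX graphs are far richer and binary-encoded.

```python
import json

# Toy illustration (not actual ONNX/PMML) of model portability through an
# intermediate representation: export on one side, load-and-run on the other.

def export_linear(weights, bias):
    """Producer side: serialize a trained linear model to a neutral format."""
    return json.dumps({"op": "linear", "weights": weights, "bias": bias})

def load_and_run(ir, features):
    """Consumer side: rebuild the model from its description and predict."""
    spec = json.loads(ir)
    assert spec["op"] == "linear", "unsupported operator"
    return sum(w * x for w, x in zip(spec["weights"], features)) + spec["bias"]

ir = export_linear([0.5, 2.0], 1.0)    # model "trained" elsewhere
print(load_and_run(ir, [4.0, 3.0]))    # 0.5*4 + 2*3 + 1 = 9.0
```

The consumer never needs the training framework, which is exactly the separation of development from consumption that the portability attribute above describes.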
Their mlsquare framework currently supports porting a subset of scikit-learn models to approximate deep neural network counterparts represented in the ONNX format. Instead of providing one-to-one operator-level mappings of machine learning models, they propose a more generic semantic mapping of models as an efficient alternative. This can be an exact semantic map (i.e., exact equivalence between the model the user provides, the primal model, and its neural network counterpart, the proxy model), an approximate semantic map (i.e., the proxy model is trained on the same data as the primal model, but its target labels are the predictions of the primal model), or a universal map (where both the intent and the implementation can be delegated to the proxy model). The mlsquare framework approaches explainability by defining explanations as predicates in a first-order logic system, represented in conjunctive normal form (a Boolean formula that is a conjunction of one or more clauses, each a disjunction of literals, i.e., an AND of ORs), and then has the proxy model produce such predicates as its output.[xix] They see producing explanations as a synthetic language generation problem and use recurrent neural networks trained on the outputs of a localized decision tree for the given training dataset.
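The approximate semantic map described above is essentially knowledge distillation: the proxy is trained not on the ground-truth labels but on the primal model's own predictions. The sketch below shows that idea with toy one-dimensional classifiers; mlsquare's actual proxies are neural networks, and everything here is invented for illustration.

```python
# Hedged sketch of an "approximate semantic map": the proxy model learns to
# imitate the primal model by training on the primal's predictions, not on
# ground-truth labels. Toy 1-D threshold models stand in for real ones.

def primal(x):
    """Primal model: some already-trained black-box decision rule."""
    return 1 if x >= 3.5 else 0

def fit_proxy(xs):
    """Train a threshold proxy on the primal model's outputs."""
    targets = [primal(x) for x in xs]          # primal predictions as labels
    best_t, best_err = None, float("inf")
    for t in sorted(xs):                       # scan candidate thresholds
        err = sum((1 if x >= t else 0) != y for x, y in zip(xs, targets))
        if err < best_err:
            best_t, best_err = t, err
    return lambda x: 1 if x >= best_t else 0

proxy = fit_proxy([1, 2, 3, 4, 5, 6])
print([proxy(x) for x in [2, 5]])  # matches the primal model: [0, 1]
```

The proxy agrees with the primal model on the training inputs but is only an approximation of it elsewhere, which is why this map is "approximate" rather than exact.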
When the system is given a new, unseen data input, this recurrent neural network outputs the corresponding decision tree path traversed, which is interpreted as the decision taken by the model at each feature.[xx] The design goals of the mlsquare framework include using well-documented APIs such as those in scikit-learn; ensuring minimal changes to the development workflow; achieving consistency by making all the quality attributes of AI mentioned above first-class methods of a model object with a consistent interface; ensuring compositionality by using a Lego-block computational framework for composing models, as with deep learning; achieving modularity through the inherent object-orientedness of many deep learning algorithms; and requiring implementations to be extensible.[xxi] The mlsquare framework currently supports some widely used frameworks to help make deep learning techniques more accessible to a broader audience. They currently use Keras to define the neural networks, plan to extend this to PyTorch, and are also extending neural architecture search capabilities with AutoKeras.[xxii]
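The path-as-explanation idea can be sketched without any neural network: walk a decision tree for one input and collect the test passed at each node as a clause of the explanation. The hand-built tree and function below are hypothetical, plain-Python stand-ins for mlsquare's RNN-based generator.

```python
# Illustrative sketch (not mlsquare code) of turning a decision tree's
# traversed path into an explanation: each node test the input passes
# through becomes one clause of the explanation.

# Tiny hand-built tree: an internal node is (feature, threshold, left, right),
# a leaf is just the predicted label as a string.
tree = ("age", 30,
        ("income", 40_000, "decline", "approve"),   # branch for age < 30
        "approve")                                   # leaf for age >= 30

def explain(node, sample, path=()):
    """Walk the tree for one input, collecting the decisions taken."""
    if isinstance(node, str):                        # leaf: prediction reached
        return node, list(path)
    feature, threshold, left, right = node
    if sample[feature] < threshold:
        return explain(left, sample, path + (f"{feature} < {threshold}",))
    return explain(right, sample, path + (f"{feature} >= {threshold}",))

label, reasons = explain(tree, {"age": 25, "income": 55_000})
print(label, reasons)  # approve ['age < 30', 'income >= 40000']
```

Each collected clause is a predicate over one feature, and their conjunction is exactly the kind of first-order explanation the preceding paragraphs describe.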
(For more on this, read the paperback or e-book, or listen to the audiobook or podcast – see jacquesludik.com)
Before we launch into a framework for strategic planning, adopting new technologies, innovating, and embracing machine learning in our daily lives, it is important to unmask and scrutinize our assumptions. Any 'knowledge' is the foundation for our values, behaviors, and motivations. The problem is that much of our knowledge is based on assumptions. We assume our best friend did not call us back because they do not care about us, so we send a scathing text. We assume the world is flat, so we flog people who say it is round. The decisions we make are based on what we think we know.[i] However normal it might be for humans to make assumptions and not question what we think we know, when our premises are incorrect, our conclusions are invalid. On the other hand, we may be very well informed. We may have all the knowledge and expertise in the world on a certain topic or project. The problem here lies in how we communicate it. This "curse of knowledge" assumes that those on the receiving end have all the context, details, and certainty that we do, so we start communicating in the middle, or at the end, and meaninglessly drone on to lost audiences.[ii] Not only have we missed the why (the goal or the purpose), we have assumed that others have the background, knowledge, and understanding that we have. So, before implementing any change, starting a new project, or simply communicating, we have two tasks: not to assume we have all the facts, and not to assume that others have the same facts and context that we do. The first task has to do with our own self-questioning, learning, and in some cases unlearning. The second task has to do with communication and leadership, where leadership also involves leading people to stand on our ground and see things clearly through the same looking glass.
Both tasks share the same theme – find and share knowledge that harbors data instead of assumption and use that knowledge to inspire why you want to do something. Leadership is not a new idea. In times long lost to us, in all areas of society leadership has formed to construct rules, maintain processes, inspire action or compliance, and make us feel as if we are a part of something. We do not always love our leaders. In fact, some would argue that we rarely do. Our bosses might create anguish, our political leaders seem in most instances more concerned with power, our CEOs seem in many instances more focused on self-gain, our religious leaders give us hope in exchange for money or sometimes strange and extreme ways of seeing or doing things, and even our families and communities develop informal leaders that tacitly dictate how we ‘should’ be doing things. All leaders have an aim, an outcome they are trying to achieve (well, they should). Simon Sinek dedicated his book Start with Why to uncovering how no aim or outcome can successfully be achieved without first asking the right questions.[iii] These questions always start with the most important question – why. Why are we doing this? What is our purpose? Our vision? What motivates us to do what we are doing? If we are asking the wrong questions, getting those answers right simply does not matter. It is the role of the leader to start with why, and to inspire and motivate those around them to act out of will and connection to that why – not simply because of arbitrary rules, targets or KPIs.
We understand that with changes as great as those that smart technology offers, there are a few seemingly more complicated things to consider. Smart technology brings utter disruption to our ways of life and is therefore met with a great deal of fear. Not to mention the power in our new technologies, code, and scientific discoveries to elicit large-scale ‘good’ or ‘bad’. On this, I have two things to say – ignoring the changes is more dangerous than drinking them in, and fearing the bad makes us powerless to ensure the good. Now let us talk about why we need to change – the first step in our framework for embracing, adopting, and democratizing AI, one step at a time, but always towards the betterment of the world at large. As discussed in the previous chapter, democratizing AI fits within the proposed broader massive transformative purpose framework for humanity.
The current reality is that we are stepping into an unknown future, and the ability to change and adapt has never been so important to our survival. Agility has been thrown around as the trendy new word, along with words and terms like digital transformation, innovation and so on. The thing about these words is that while we might roll our eyes when we hear them, they are overused (without real understanding) for a reason. Our future is unknown. The pace of development, knowledge growth, technology advancement, and analysis makes it so. We do know that at the core of our unknown future is smart technology. We also know that smart technology keeps getting smarter, moving faster and fundamentally altering our lives. A ground-breaking invention or advancement could make what we think, how we think and what we do today irrelevant tomorrow. Therefore, current business models, strategies and roadmaps no longer work. These often rely on an amount of predictability that we just do not have. While we have more data than ever before and more ways to analyze this data than ever before, we have less certainty about what next year will be like. The bottom line is that we must be prepared to change direction, stop what we are doing, and completely reinvent ourselves if we have to. We also must be prepared to do this often. Being comfortably uncomfortable is vital if we want to avoid irrelevance, non-competitiveness and possibly business extinction.[iv] The same is true for governments, which risk falling far behind the global economy, removing their countries from global dialogue, advancements, and information sharing, and not taking advantage of what the world offers to properly serve and protect their citizens. Agility, innovation, and digital transformation are some of the key action words for any business, organization, or government that aims to thrive in the smart technology era.
We must be willing, prepared and inspired to change quickly where in the past we have steered and controlled. We must follow our data, be digitally savvy, responsive, and available for our customers, employees, or citizens, and prepared to chuck our processes, strategies, and services aside (or repurpose them) based on trends, demands, advancements and inventions.
We can no longer have five-year plans that direct our futures with minimal deterrents or interruptions. We can plan for the short term, and direct our vision to solving long-term problems, but must remain agile in our processes, thinking and approach. The pace of global digital transformation and innovation makes this so. In this regard Amy Webb, a Quantitative Futurist and Professor of Strategic Foresight at the New York University Stern School of Business, has said in a Harvard Business Review article, How to Do Strategic Planning Like a Futurist, that “deep uncertainty merits deep questions, and the answers aren’t necessarily tied to a fixed date in the future”.[v] She further asks “where do you want to have impact? What will it take to achieve success? How will the organization evolve to meet challenges on the horizon?” and reckons that “these are the kinds of deep, foundational questions that are best addressed with long-term planning”.[vi] Without this kind of thinking, we are at risk of becoming the likes of out-of-business publishers who did not account for internet content consumption and advertising, movie renters who did not account for online streaming, and so on, because changing their business model was not in their five-year plans. In her futurist’s framework for strategic planning, she recommends that we think about time differently: use time cones instead of timelines that arbitrarily assign goals on a quarterly or yearly basis, and base our planning on what is most certain through to what is least certain. One can then divide the time cone into four parts, where each section of the cone is a strategic approach that encloses the one before: it starts with tactics, which is then followed by strategy, vision, and systems-level evolution.
Because we have the most certainty about the trends and probable events of the most immediate future, we can direct our tactics (actionable strategic outcomes) for the next year or two towards achieving related goals. After that, our strategies become less certain and cannot be formulated as tactics or plans. For the following 2-5 years, we focus more on priorities and shifts in the organization’s structure or staffing requirements. In the more distant future (5-10 years), we have even less certainty other than the vision of what we want to achieve and where we aim to take the organization. When we reach past 10 years, things are wholly uncertain. Granted, things may change drastically after the first 6 months, requiring a change in tactics, but after 10 years we can assume no certainty. Internal and external systems, trends and processes, which are all part of systems-level evolution, will evolve, fall away, be replaced, and be disrupted. We must think about how this uncertainty and possibility might affect us, and how we can direct it.[vii]
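The four bands of the time cone described above can be sketched as a simple data structure; the horizon boundaries are indicative only (taken from the ranges mentioned in the text, not a formal specification):

```python
# Illustrative sketch of the four-part time cone: each band encloses the one
# before it, moving from most certainty (tactics) to least (systems-level
# evolution). Horizon boundaries in years are indicative assumptions.
time_cone = [
    {"band": "tactics",                 "horizon_years": (0, 2),
     "focus": "actionable strategic outcomes"},
    {"band": "strategy",                "horizon_years": (2, 5),
     "focus": "priorities, organizational structure, staffing"},
    {"band": "vision",                  "horizon_years": (5, 10),
     "focus": "what we want to achieve and where to take the organization"},
    {"band": "systems-level evolution", "horizon_years": (10, None),
     "focus": "how evolving internal and external systems might affect us"},
]

def band_for(years_out):
    """Return the planning band that a given horizon falls into."""
    for part in time_cone:
        lo, hi = part["horizon_years"]
        if hi is None or years_out < hi:
            return part["band"]

print(band_for(1))   # tactics
print(band_for(7))   # vision
```

The nesting mirrors Webb's point that the cone is addressed as a whole: a shift in the outer systems-level band should propagate inward and reset the tactics at the front of the cone to the present day.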
Perhaps the most important thing to note is the importance of agility. The moment there is a new invention, a new player in the industry, new technology, new trends, or new business and consumer intelligence, we need to be willing to shift, discover and perhaps seek a completely new path. So, this might change our tactics and our vision. Amy recommends that “the beginning of your cone and your tactical category is always reset to the present day” and that the ideal result is “a flexible organization that is positioned to continually iterate and respond to external developments”.[viii] If we have a strong sense of what our industries might look like, we can address the entire cone simultaneously.[ix] By doing this, leaders are in a much better position to assess whether their more immediate tactics and strategies allow and account for the future landscape, effects of other industries, and the potential state of our own industry. Conversely, Amy warns that “if leaders do not have a strong sense of how their industry must evolve to meet the challenges of new technology, market forces, regulation, and the like, then someone else will be in a position to dictate the terms of your future”.[x] We can also use this type of holistic thinking to imagine, lead and direct the future developments and be central to systems level evolution through our tactics and strategies.
Bring your attention back to your massive transformative purpose: that vision you have of what you want to achieve for your family, business, customers, industry, citizens, and the world. If you keep focused on that, then the most important thing will always be achieving it. You become less bogged down in following strategies simply because they sounded good two years ago and are now part of your key performance indicators. If new developments mean that you can achieve your MTP in a new way, or if a breakthrough discovery means you can achieve it in a way you did not consider possible before, these are now the paths you follow. The same holds for the proposed MTP for humanity and the associated MTP goals. My personal MTP of helping to shape a better future in the Smart Technology Era fits in with the MTPs of my business ventures and non-profit organizations as well as the MTP for humanity. In order to follow the most optimal paths to achieving these MTPs and associated goals, I, and we as a collective, need to be as agile as possible.
Now, what is left is to understand all the little pieces that come into how we actually go about achieving transformation, innovation and digitization. Once a business or organization has a clearly defined MTP, the focus shifts to embarking on an AI-driven digital transformation journey as described in Chapter 4. Some of the key elements of successful AI-driven digital transformation include vision or intent, data, technology, process, and people. This is not only relevant for businesses and organizations, but also communities, governments, as well as regional and global organizations. If you are thinking about your business or country as a futurist would, or in terms of your massive transformative purpose, you are already thinking about intent. You are already taking the first step. As discussed in previous chapters, AI should be a part of every company or organization’s strategy.[xi] It should not be regarded as just another information technology project. Rather, automation and machine learning should define change, growth, products, and reach, and serve as a strategic tool to achieve organizational vision. If we are not considering machine learning in our strategy, no matter what institution or organization we are a part of, we are already behind. Furthermore, we are ignoring efficient, innovative, and scalable solutions to our current problems. Whoever you are, think about this – if your current solutions, strategies, and plans involve using only the tools that have been available to you since your founding to stay profitable, become profitable or scale, you are in danger of becoming irrelevant and wasting inordinate amounts of money. If your solutions involve more or improved physical structures, this year’s Christmas campaign, improvements to your products or services, or the latest specials you do not think the public can refuse – you are in danger of becoming irrelevant. The things that have worked for you in the past, that even seem foolproof, do not work anymore.
And if you are in the business of public service and it is not profit you are after, achieving inclusion and development and ensuring resources and rights for your communities simply cannot happen quickly enough with the traditional methods you are used to. If they could, I imagine you would be in a quite different place today. If smart technology such as machine learning, IoT, distributed ledger technology, automation and robotics are not part of your strategies, you are simply ignoring some of the most powerful and scalable tools and solutions for the problems that you face. If you only see AI and any automation or technological advancements as an IT project, not only are you not seeing the full picture, but you probably have no idea how AI can change the path of your business, its impact and maybe even the world. Misunderstandings about what AI is and is not could fuel opposition to technologies with the potential to benefit everyone. From a governmental perspective, poorly informed regulation that stifles innovation would be a tragic mistake.[xii]
In a recent Nature article, AI for Social Good: Unlocking the Opportunity for Positive Impact, authors from the UK, Europe, Japan and Africa who are part of the AI for Social Good (AI4SG) movement and represent academic institutions, global organizations, non-profit organizations and tech companies (like Google and Microsoft) provided a “set of guidelines for establishing successful long-term collaborations between AI researchers and application-domain experts, relate them to existing AI4SG projects and identify key opportunities for future AI applications targeted towards social good”.[i] The AI4SG movement is putting together interdisciplinary partnerships that are focused on AI applications that help achieve the United Nations’ 17 Sustainable Development Goals (SDGs). The same guidelines are also relevant to the proposed MTP for Humanity and the 14 associated complementary MTP goals described in Chapter 10. These guidelines include ensuring that the expectations of what is possible with AI are well-grounded; acknowledging that there is value in simple solutions; ensuring that applications of AI are inclusive and accessible, and reviewed at every stage for ethics and human rights compliance; making sure that goals and use cases are clear and well-defined; understanding that deep, long-term partnerships are required to solve large problems successfully; making sure that planning aligns incentives and incorporates the limitations of both the research and the practitioner communities; recognizing that establishing and maintaining trust is key to overcoming organizational barriers; exploring options for reducing the development cost of AI solutions; improving data readiness; and ensuring that data is processed securely with the greatest respect for human rights and privacy.[ii] The AI4SG group also highlighted some case studies to illustrate how their collaboration guidelines can be used with new, mature, and community-wide projects.
These projects included the use of AI to improve citizen feedback in Somalia via a non-governmental organization called Shaqodoon, the Troll Patrol project that used AI to quantify and analyze abuse against women on Twitter and potentially make abusive tweet detection easier, and the Deep Learning Indaba, an AI community in Africa that supports AI4SG projects and uses AI for sustainable development.[iii]
A team from Oxford University’s Digital Ethics Lab and the Alan Turing Institute, which are also part of the AI4SG movement, shared seven factors that are key when designing AI for social good in a Science and Engineering Ethics journal article, How to Design AI for Social Good: Seven Essential Factors.[iv] They argue that our understanding of what makes AI socially good in theory is limited, that many practical aspects of AI4SG still need to be figured out, and that we still need to reproduce the initial successes of these projects in terms of policies. Their analysis is supported by 27 AI4SG projects that function as use case examples. The team identified the following key factors to help ensure successful project delivery: (1) falsifiability and incremental deployment to improve the trustworthiness of AI applications; (2) safeguards against the manipulation of predictors; (3) receiver-contextualized intervention; (4) receiver-contextualized explanation and transparent purposes; (5) privacy protection and data subject consent; (6) situational fairness; and (7) human-friendly semanticization.[v] Each of these factors has a corresponding best practice for AI4SG designers.
These include (a) identifying falsifiable requirements (essential conditions without which the system cannot fully operate) and testing them in incremental steps from the lab to the “outside world”; (b) adopting safeguards which ensure that non-causal indicators do not unsuitably skew interventions, and limiting knowledge (in appropriate fashion) of how inputs affect outputs to prevent manipulation; (c) building decision-making systems in discussion with users that engage with or are impacted by the AI systems, taking into consideration the users’ characteristics, the methods of coordination, and the purposes and effects of an engagement, and respecting the users’ right to ignore or modify engagements; (d) choosing a level of abstraction for AI explanation that fulfils the expected explanatory purpose and is appropriate to the system and the receivers, then deploying arguments that are rationally and suitably convincing for the receivers to deliver the explanation, and ensuring that the AI system’s purpose is knowable to receivers of its outputs by default; (e) respecting the threshold of permission established for the processing of datasets of personal data; (f) removing from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety, or other ethical imperatives; and (g) not obstructing people’s ability to semanticize, that is, to give meaning to and make sense of something.[vi] The team also makes the point that the essential factors that they have identified correspond to the five principles of AI ethics, which are beneficence (i.e., do only good, which includes promoting well-being, preserving dignity, and sustaining the planet), non-maleficence (i.e., do no harm, which includes privacy, security and “capability caution”), justice (i.e., promoting prosperity, preserving solidarity, and avoiding unfairness), autonomy (i.e., the power to decide), and explicability (i.e., enabling the other
principles through intelligibility and accountability).[vii] Of these principles, beneficence is seen as a precondition for AI4SG. They also recommend that well-executed AI4SG projects balance factors intrinsically at the individual factor level (neither overdoing nor understating a factor) as well as systemically, striking a balance between multiple factors.
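Best practice (f), situational fairness, is concrete enough to sketch in code. The sketch below is illustrative only (the field names and helper function are hypothetical, not from the AI4SG papers): irrelevant variables and proxies are dropped from a dataset, except where a variable is explicitly retained because it supports inclusivity, safety, or another ethical imperative:

```python
# Hypothetical sketch of situational fairness: drop irrelevant/proxy fields
# from a dataset unless they are ethically required to stay.
def apply_situational_fairness(records, irrelevant, ethically_required):
    """Remove irrelevant or proxy fields unless they are ethically required."""
    to_drop = set(irrelevant) - set(ethically_required)
    return [{k: v for k, v in row.items() if k not in to_drop}
            for row in records]

records = [
    {"income": 52000, "postcode": "8001", "skin_tone": "dark", "outcome": 1},
]
# 'postcode' can act as a proxy for race, so it is dropped; 'skin_tone' is
# retained here because, for example, a melanoma detector must be evaluated
# across skin tones (an inclusivity imperative).
cleaned = apply_situational_fairness(
    records,
    irrelevant=["postcode", "skin_tone"],
    ethically_required=["skin_tone"],
)
print(cleaned[0])  # {'income': 52000, 'skin_tone': 'dark', 'outcome': 1}
```

The point of the exception clause is exactly the balancing act the authors describe: blindly deleting every sensitive attribute can itself cause harm when that attribute is needed to detect or prevent unfairness.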
We live in times ruled by scientific discovery and economic development – although it may not seem so from a village in Ethiopia. With all the information, discovery, and development of recent years, combined with how little we actually know about what will be discovered and exactly what our future will look like, we urgently need a way to answer the ethical questions that keep our discoveries and developments from having a negative impact on planet earth, humanity and the individual. Growing concern and debates around the world are doing a good job at spreading fear and making some of us more accountable and intentional about today’s effects on the future of life. But most of us still do not know exactly how to get there or how to truly hold those with power to the correct standards, laws and policies. We need ways to ensure that the human experience, human rights, and the protection of our planet come first. Our affinity towards growth has welcomed innovation and development before thinking about their implications. By no means should we stop innovation and development, but we should watch it, steer it, and understand it. We should always consider its potential consequences with the same vigor with which we embrace the ways it benefits us or solves problems. In this we not only need to monitor the use and impact of our new developments, but also find meaningful ways to steer them towards greater economic, social and knowledge inclusion and general individual well-being. Smart technologies such as AI have the power to innovate, solve, enhance, and develop at an electrifying pace.
This speed of our changing world, Yuval Harari warns, makes it difficult for us to make sense of our present circumstances and to predict what the future holds – for politics, economics, medicine, production, agriculture, the environment and humanity.[i] By not considering the impact of what we are doing today, we put our future at stake.[ii] More so, if we are not directing this future towards one that puts humanity first, we could be leading future generations into a world we would never choose for ourselves. The future might be unknown, but that does not mean it is out of our control. This forms the basis for policies, guidelines, and global discussions on how to use AI (and smart technology in general) ethically, responsibly and in ways that protect, develop and nurture life. Trustworthy, ethically responsible, transparent, and unbiased AI is critical for the transformational purposes of AI, and for businesses and society to thrive in the Smart Technology Era.
As discussed in earlier chapters, AI applications can on a high level be categorized into customer-facing applications and industrial applications. The different intended uses of these systems inspire different ethical considerations. For industrial applications, AI in collaboration with other smart technology can be used to predict equipment and asset maintenance, enhance industrial processes, assess issues, and aid in mining, manufacturing, and farming, and in any instance where the aim is the optimal running, intelligence, and issue detection of systems and machines. Industrial and consumer-facing applications can effectively be categorized by whether they are intended to maximize the safety and efficiency of industrial processes or whether the human being is the intended audience. Because of these differences, when we are thinking about ethics in AI, we need to think about whether applications are, by nature, industrial or customer-facing. For customer-facing applications, fairness, privacy, data governance and explicability are incredibly important. Industrial applications need to be less concerned with privacy, but extremely strict when it comes to safety, trustworthiness and robustness of systems and outcomes. In the rest of this chapter, I will focus more on the systems that are intended for human benefit, consumption, use, knowledge, and adoption. Research, world summits, conferences and even policies have taken the stage in AI over the last few years. Many more are popping up all the time, and dialogue is flowing in AI, data, and economic communities. Singapore’s FEAT (Fairness, Ethics, Accountability and Transparency) Principles in the financial sector have given us a great foundational framework for the responsible use of AI.[iii] Many countries are following suit and are incorporating similar thinking into the way they consider the potential consequences of AI.
The European Union published their Ethics Guidelines for Trustworthy AI in 2019 – helping us grapple with what the operationalization of AI should look like in a way that protects the trustworthiness, transparency and impact of AI systems.[iv] For the EU, trustworthy AI means three main things: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Based on fundamental rights and ethical principles, the guidelines list seven key requirements that AI systems should meet: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.[v]
Since the availability and impacts of technology are borderless, whose laws and regulations are we talking about? Which ethical principles are we talking about? Whose social perspective are we talking about? Only those of the EU? Let us forget about the rest of the world for a second. Even when we look at ethical AI within the confines of the EU, each country has its own laws, guidelines, cultures, values, and foundational ethical principles. The EU’s Constitution may celebrate and insist upon the respective rights, laws, and values of each nation, but could it still protect these when machine learning, drones and IoT are in full play?[vi] How could processes and legislation ensure that a development in Germany does not negatively impact the values or culture in Greece, for example? It gets even more complicated when we look outside the EU and expand our horizon to the rest of the world. The digital economy is one where trade, services, information, and work have no boundaries. Data and AI applications in one country might infringe human rights in another country. China’s Social Credit System, which aims to promote the “traditional value of creditworthiness” by incentivizing trustworthiness and punishing untrustworthiness, has drawn global outcry, yet it might seem acceptable based on collectivist values of harmony and transparency.[vii] It might even appeal (and has) to many others building ‘behavioral change’ platforms and applications from a standpoint of wanting to make the world a better place. But what does it mean for privacy or the right not to be punished for potential behavior? What does it mean to be innocent until proven guilty? And what does it mean to those currently outside of these systems? These applications only need to be developed once. Once they exist, they exist everywhere. Simple tweaks, knowledge sharing, and integrations make it so that what is developed in one country or by one business has the potential to affect the entire world.
So how do we decide what is acceptable to develop? For what purposes and for what uses? If the EU makes ethical decisions, do they really matter if the rest of the world is making these decisions in a different way? Globalization and Digitization mean that the effects of creations, developments, and laws in one country do not only affect that country. Not anymore.
The Organization for Economic Co-operation and Development (OECD) seems to understand this and is trying to get as many countries as possible on board in its effort to practically effect values-based AI principles, which are very much in line with the proposed MTP for Humanity and its associated MTP goals: (1) AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being; (2) AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society; (3) There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them; (4) AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed; (5) Organisations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.[viii] But again we run into issues when looking into the practicality of applying these rather idealistic theories. Whose rule of law? Whose human rights? Whose interpretation of democracy? And what about those who do not govern democratically? Is it safe to leave anyone out of the principles, guidelines or laws that really affect the entire world? If their very nature is unclear, impractical, or does not allow for and account for practical adoption throughout the world, then we are dooming ourselves to theory with no actionable outcomes.
Data biases in AI perpetuate biases that we already have. In asking who benefits, who is harmed, who is using and who has the power to use AI systems and products, we are not only asking about the overt implications, but also how these questions are reflected in the ways the systems are built.[i] Who is doing the designing always impacts what is being designed and the results it produces, however unaware we may be or unintentional the biases. For example, in the development of melanoma detection algorithms, the technology has only worked on white skin.[ii] What about everyone else? Another algorithm, used to find criminals quicker, tends to ignore those living in wealthier areas and white people in general. Do white people not commit crimes? These two vastly different technologies both show biases: the first is that only people of a certain kind can benefit from life-saving technology, and the second is that existing cultural and societal discriminations are being taught to algorithms that the rest of the world is told are completely objective. This problem is further perpetuated by the widespread effects of technology. It is no longer one person or group of people who must be present to do this job; it is technology that can be used anywhere in the world, on streams of people at a time. In this, smart technology and machine learning have an immense power to perpetuate, spread and expand social injustices and exclusions, prejudices, personal feelings, cultural, racial and gender inequalities, and already entrenched systems.
Given the accelerating pace of smart technology driven automation and its impact on people’s required skills and knowledge in the dynamic job market, there is a growing need for an always accessible type of continuous learning that covers life-wide and lifelong learning. To thrive as a citizen and participant in the workplace, people need to not only be equipped with the relevant literacies, competencies, and character qualities, but also make proactive smart choices about where the needs and opportunities are shifting and where they can make meaningful contributions. As discussed in Chapter 7, ultra-personalized AI-enabled education is a gamechanger for more productive learning experiences. AI can also help to find relevant jobs and tasks in an evolving marketplace that match people’s interests, knowledge, skills, competencies, and experiences. At the same time, AI is increasingly having a disruptive impact on many traditional jobs and tasks in the workplace and job market. In AI Superpowers, Kai-Fu Lee discusses a blueprint for human coexistence with AI in which he recommends that the private sector, which drives the AI revolution, take the lead in creating a more human-centric AI-driven workplace with new humanistic jobs that are more social, compassionate, creative, or strategy-based in nature, complementing the AI-driven software and machines that can focus on automation, optimization, and non-social tasks.
There are already events and webinars around the globe such as the one organized by the UK India Business Council in 2020 about Rewiring the 21st Century Workplace – How to make it Human-centric and Tech-driven which brainstorms how organisations can re-design the future workplace, how people’s health and wellbeing can affect the health and wellbeing of the company or organization, how we can better integrate human behavior and technology to achieve long-term success, and what best practices can be shared to ensure people and organizations are making the most of both their human capital and technological assets.[i]
A few years ago, the World Economic Forum presented a meta-analysis of research on 21st-century skills in primary and secondary education and extracted 16 skills in three broad categories: foundational literacies, competencies, and character qualities.[ii] Foundational literacies concern how people apply core skills to everyday tasks and provide an underpinning for building competencies and character qualities. Apart from literacy and numeracy, they also include scientific, ICT, financial, and cultural and civic literacy. Competencies involve the ability to tackle complex challenges using skills such as critical thinking, problem solving, creativity, communication, and collaboration. Character qualities describe how people approach their changing environment: initiative and curiosity help with originating new ideas and concepts; adaptability and persistence make people more flexible and resilient when faced with problems; and leadership, along with social and cultural awareness, helps people have positive engagements in ways that are culturally, socially, and ethically acceptable.[iii] The WEF has also defined the top ten job skills of 2025 in the categories of problem solving, self-management, working with people, and technology use and development.[iv] The problem-solving skills involve analytical thinking and innovation; complex problem-solving; critical thinking and analysis; and creativity, originality, and initiative. Self-management skills include active learning and learning strategies as well as resilience, stress tolerance, and flexibility, whereas working-with-people skills involve leadership and social influence. Examples of technology use and development skills include technology design and programming, and technology use, monitoring, and control.
In Critical Skills for the 21st Century Workforce, Ryan Whorton, Alex Casillas, and Fred Oswald identify three major forces that have fundamentally changed the nature of work in the 21st century (interpersonal, technological, and international in nature) and analyze a core subset of 21st-century skills related to these forces, namely teamwork, safety, customer service, creativity, critical thinking, meta-cognition, cross-cultural knowledge and competence, and integrity and ethics.[v]
In Artificial Intelligence: 101 things you must know today about our future, Lasse Rouhiainen references a description of 24 skills that he published in a previous book, The Future of Higher Education — How Emerging Technologies Will Change Education Forever, categorized as either people skills or business skills for the future.[vi] The people skills for the future involve self-awareness and self-assessment, emotional intelligence, social intelligence, interpersonal intelligence, empathy and active listening, cultural flexibility, perseverance and passion, a focus on the common good, mindfulness and meditation, physical training, and storytelling. The business skills for the future have to do with problem solving, creativity, adaptability to new technology, an entrepreneurial mindset, sales and marketing, data analysis, presentation skills, environmental intelligence, large-scale thinking, accounting and money management, the ability to unplug, design thinking and a design mindset, and spotting trends. In addition, Rouhiainen emphasizes five future skills and competencies: AI (covering, for example, deep learning, machine learning, robotics, and self-driving car engineering) and blockchain (including cryptocurrency); social intelligence (coaching, consulting, emotional intelligence, empathy, and helping others); a creativity mindset (the ability to create something out of nothing, design thinking, a design mindset, and personal brand cultivation); computational thinking (computational sense-making, contextualized intelligence, and virtual collaboration); and learning how to learn (self-awareness, resilience, and mindfulness).[vii] From all of these perspectives we are getting a clearer picture of the skills and competencies we should develop to give us a better chance to survive and thrive in the Smart Technology Era.
The Institute for the Future has communicated its research findings about the essential drivers that will reshape the workplace landscape, as well as the salient work skills needed over the next decade, in a report called Future Work Skills 2020, sponsored by the University of Phoenix Research Institute.[viii] This report specifically considers future work skills and competencies across a variety of jobs and work settings, not possible future jobs (which I'll elaborate on later in this section). The key drivers they list include: extreme longevity, which is about the impact of increasing global lifespans on the nature of learning, professions, and career trajectories; the ascent of AI-driven machines and systems, where workplace automation affects human workers doing mechanical or habitual repetitive tasks; the computational world, where the world is being instrumented and made programmable through enormous increases in sensors, communications, IoT devices, and processing power; the novel media ecology, where communication tools are enhanced with novel multimedia literacies and technology for digital animation, video production, media editing, augmented reality, and gaming; super-structured organizations, where social media platforms and technologies drive novel forms of production and value creation at scale, on a spectrum from highly personalized to global reach; and a globally connected world, where increased global networking, interconnectivity, and interdependence provide tremendous opportunities for agile and diverse companies, organizations, communities, cities, and countries to innovate and grow.[ix] These six disruptive forces, all contributing to the transformation of civilization and the societal landscape, make it even more possible to implement the proposed MTP for humanity: driving beneficial outcomes for all through decentralized, adaptive, and agile economic, social, and governance systems that democratize knowledge, science, smart technology, and other tools in optimal values-based and human-centric ways.
The Institute for the Future further outlines ten future work skills that will be crucial for people to succeed in the workplace going forward. (1) The first is sense-making, the "ability to determine the deeper meaning or significance of what is being expressed", which I have also identified as a key area of competence, as expressed through the ninth MTP goal about implementing better collective sensemaking for all of humanity and better alignment with respect to our common goals and visions.[x] Whereas the Institute for the Future primarily links sense-making to the rise of smart machines and systems, I think it applies more broadly as we deal with opposing views, perspectives, and conspiracy theories on social, cultural, economic, political, and many other levels. The other future work skills linked to the identified disruptive drivers include: (2) social intelligence, the "ability to connect to others in a deep and direct way, to sense and stimulate reactions and desired interactions"; (3) novel and adaptive thinking, the "proficiency at thinking and coming up with solutions and responses beyond that which is rote or rule-based"; (4) cross-cultural competency, the "ability to operate in different cultural settings"; (5) computational thinking, the "ability to translate vast amounts of data into abstract concepts and to understand data-based reasoning"; (6) new-media literacy, the "ability to critically assess and develop content that uses new media forms, and to leverage these media for persuasive communication"; (7) transdisciplinarity, the "literacy in and ability to understand concepts across multiple disciplines"; (8) design mindset, the "ability to represent and develop tasks and work processes for desired outcomes"; (9) cognitive load management, the "ability to discriminate and filter information for importance, and to understand how to maximize cognitive functioning using a variety of tools and techniques"; and (10) virtual collaboration, the "ability to work productively, drive engagement, and demonstrate presence as a member of a virtual team".[xi]
The OECD also produced a Future of Education and Skills 2030 conceptual learning framework report on Transformative Competencies for 2030 that recommends three transformative competencies to help shape a future in which well-being and sustainability are achievable, namely creating new value, reconciling tensions and dilemmas, and taking responsibility.[xii] More specifically, (1) they describe creating new value as "innovating to shape better lives, such as creating new jobs, businesses and services, and developing new knowledge, insights, ideas, techniques, strategies and solutions, and applying them to problems both old and new", as well as questioning the current situation, collaborating with others, and attempting to think "outside the box". This ties in with applying creativity and critical thinking to innovate in line with a purpose, or better still, a massive transformative purpose. The sixth MTP goal is relevant here, as it is about collaborating in optimal human-centric ways to use our growing knowledge base and general-purpose technologies in a wise, values-based, and ethical manner to solve humanity's most pressing problems and create abundance for everyone. (2) Reconciling tensions and dilemmas implies making sense of the many inter-relations and interconnections between ostensibly opposing or irreconcilable ideas, positions, and lines of reasoning, and considering the outcomes of actions over both the short and long term. This kind of sensemaking is exactly what I am advocating in an interdependent, globally connected world as we consider democratizing AI and shaping a better future for as many people as possible. This, too, is in line with the ninth MTP goal about implementing better collective sensemaking for all of humanity and better alignment with respect to our common goals and visions.
A deeper understanding of contrasting views and a spectrum of positions not only assists in developing better arguments and reasoning to support specific positions, but also helps to harmonize and find constructive, pragmatic solutions to problems and disputes in respectful and empathic ways. (3) Taking responsibility is "connected to the ability to reflect upon and evaluate one's own actions in light of one's experience and education, and by considering personal, ethical and societal goals".[xiii] This is absolutely key and congruent with MTP goals 10, 11, and 12, which focus on helping people live more meaningful lives, developing virtues and character strengths (including wisdom and knowledge, courage, humanity, justice, temperance, and transcendence), and building local and virtual empathic communities with more meaningful work and relationships.
This Democratizing AI Newsletter coincides with the launch of BiCstreet's "AI World Series" Live event, which kicked off both virtually and in-person (limited) on 10 March 2022, where Democratizing AI to Benefit Everyone is discussed in more detail over a 10-week AI World Series programme. The event is an excellent opportunity for companies, startups, governments, organisations, and white-collar professionals all over the world to understand why Artificial Intelligence is critical to strategic growth for any department or sector. (To book your tickets to this global event, enter Coupon Code JACQUES001 for a 5% discount. Purchase tickets here: https://www.BiCflix.com; see the 10-week programme here: https://www.BiCstreet.com.)
The audiobook version of Democratizing Artificial Intelligence to Benefit Everyone is also available via major audiobook marketplaces worldwide; see details on my website as well as below. You can also listen to audio content from the book on the Jacques Ludik YouTube Channel or Jacques Ludik Podcasts. This release follows the e-book (Kindle) and paperback versions of the book, released earlier this year on Amazon, with some further updates made recently.
For some background, see also the following introductory articles Democratizing AI to Benefit Everyone and AI Perspectives, Democratizing Human-centric AI in Africa, and Acknowledgements – Democratizing AI to Benefit Everyone (as well as United Nations & Democratizing AI to Benefit Everyone; World Economic Forum and Democratizing AI to Benefit Everyone; OECD and Democratizing AI to Benefit Everyone; AI for Good and Democratizing AI to Benefit Everyone).
For further details, see jacquesludik.com.
[i] Simon Sinek, Start with Why: How Great Leaders Inspire Everyone Around Them to Take Action, pg. 11.
[ii] Brad Shorkend and Andy Golding, We are Still Human and Work Shouldn’t Suck, pg. 143–146.
[iii] Simon Sinek, Start with Why: How Great Leaders Inspire Everyone Around Them to Take Action.
[iv] Brad Shorkend and Andy Golding, We are Still Human and Work Shouldn’t Suck.
[v] Yuval Harari, Homo Deus.
[vi] Lasse Rouhiainen, Artificial Intelligence: 101 things you must know today about our future.
[vii] Lasse Rouhiainen, Artificial Intelligence: 101 things you must know today about our future.
[ix] François Chollet (@fchollet) | Twitter