David Frazee is vice president of 3M’s CRSL group, the 3M organization that has long worked at the leading edge of AI technology, including machine learning, data, analytics and cloud computing. CRSL’s AI Lab is also a focal point for bringing new AI talent and technologies into 3M through hiring and internships.
We spoke with David about the ways ethics and AI intersect.
What should people know about ethics and AI? How are they related?
There is an ongoing and growing dialog about the ethical use of AI, and rightfully so. With vast amounts of data describing both the world and its people being created and analyzed continuously, most agree that some form of consent is reasonable when that analysis involves data that is, or could be, personal.
The spectrum may range from simple location information captured when using maps to detailed health information or individual financial data.
Ethics involves more than getting consent for the collection and use of personal data; using data at a corporate or governmental level will increasingly require clarity of value and purpose.
Concerns could include the use of AI to influence elections, AI being available only to the wealthy (nations or people), AI applied to poorly governed medical research, AI used to degrade information security, and so on.
What are the primary ethical concerns around AI? (Privacy? User data? Other?)
Ubiquitous cloud services, low-cost compute and storage, plus AI and machine learning’s capabilities and accessibility have made it far easier for others to perform sophisticated analyses for purposes from admirable to potentially criminal.
Admirable uses of AI include vaccine development, risk analysis for finance or transportation, and customized content for services such as Spotify, Netflix and e-commerce. The end goal of these applications is to advance a medical solution or tailor a service to better fit societal or customer needs.
Less admirable uses could include analyzing a company’s networks in order to hack an organization or individual, using patient health information to deny care or treatment, or exploiting large data advantages and AI to control media or to create and spread misinformation. These become nefarious uses, designed to cause harm or manipulate outcomes with data.
It is not always black and white either, which is why a broader conversation about the role of AI in the hands of any user requires a greater understanding of its capabilities and ethics.
What ethical considerations should a company take into account when using AI? / How should companies decide when and why to use AI?
Many companies have long had ethics boards or institutional review boards (IRBs) to consider new products, treatments and studies for health care products or procedures. These bodies will need to evolve to handle the complexity and speed of data and AI, so that both are used in a principled and ethical way.
Governing bodies for the ethical use of data will have to understand the data itself, its breadth and depth, its potential uses now and in the future, and the tradeoffs between a possibly very high benefit and the cost of using data without governance.
Understanding how AI reaches its conclusions will likely be very important in some cases and less so in many others. One may not really care why the next song in a playlist came up but may care greatly why a certain medication was chosen for treatment.
What type of oversight will keep AI in check?
It’s not all from the center, nor all on the individual. Oversight is a balance grounded in strong policy, with the flexibility and adaptability needed to meet that policy. This does not mean it is acceptable to work around a policy or requirement. Requiring a single tool to cover all oversight could be restrictive; a combination of tools and processes may be how this subject matter is governed.
How is 3M approaching AI from an ethical standpoint? What is the line between logical and ethical?
We view AI as a notable technology that can advance science and humankind, but we believe its vast applications and implications make careful analysis essential.
3M has an integrity and value model we apply to everything we do. We’ve earned our reputation for integrity over many decades, and every decision we make must be guided by our Code of Conduct.
AI is a unique technology that has many uses but raises broader questions as well. Logic and legality are necessary but never sufficient when viewing its broad application.
There are ethical questions about the use of such data, questions of practical application, personal privacy and representation, that require important conversations and far outweigh any merely “logical use” of AI.
What ethical issues around AI does 3M believe in?
AI is a technology that is incredibly exciting. But we believe all companies utilizing AI should have strong values, transparency and integrity models around its application.
We also believe it’s important for customers and other stakeholders to be aware of the power of AI and their data, which is part of the same conversation.
Although most oversight today is self-governed, we believe that collaboration can generate ideas and progress that benefit all parties in the conversation about ethics.
As AI becomes more ubiquitous, it’s important that we all understand the role of AI in broad ways. We want to protect individuals by creating an ethical model for use, access and application.