
🧠 AI and Ethics: Google's Protein Structure Predictor

PLUS: The Case for Both Utilitarian and Deontological Ethics

Welcome back, AI prodigies!

In today's Sunday Special:

  • 🙏Ethical Decision-Making

  • 🤲The Greater Good

  • 🧬Genetics, AI, and Ethics

  • 🔑Key Takeaway

Read Time: 5 minutes

🎓Key Terms

  • Utilitarianism: a philosophical theory that states moral rightness should be judged based on the outcomes of an action, not the intent behind it.

  • Deontology: a philosophical theory that states moral rightness should be judged based on the intent behind an action, not its outcomes.

  • Proteins: complex molecules that help perform dozens of biological and physiological functions in the human body.

🙏ETHICAL DECISION-MAKING

Public anxieties about job displacement and AI misuse often paint a bleak picture of the future. Many concerns are valid, as disinformation, deepfakes, and job displacement are already proliferating through the digital sphere. In a recent Pew Research poll, 58% of Americans expressed more concern than excitement about AI's growing role in daily life. These sentiments are likely a recent phenomenon, as media outlets have historically given AI a fair shake: a sentiment analysis conducted by Colin Garvey and Chandler Maskal of Stanford University found that from 1956 to 2018, news coverage of AI was, on balance, positive.

Regardless of public sentiment, all technologies bring pros and cons, most of which are relative in the short term and absolute in the long term. An Amazon warehouse stocker's job loss puts more money in shareholders' pockets today; fast-forward a few decades, and no one will want to sort boxes anyway. Even benign AI use cases, like advancing health equity in India and Africa, face opportunity costs. Calculating the precise social benefit of each AI application is a fool's errand. Nevertheless, building the intuition to make complex ethical decisions is critical for deploying robust computational systems responsibly.

🤲THE GREATER GOOD

Every day, we make about 35,000 conscious decisions. All of them maximize our internal satisfaction, now or in the future, and most involve only ourselves. For instance, while completing an assignment, we optimize for our own interests. However, several decisions surrounding the task (when to do it, how much time to spend on it, whether to do it in the company of a friend) force us to broaden our criteria to include others. In more nuanced circumstances, like deciding whom to invite to a party, the dynamics between family, partners, friends, acquaintances, enemies, and strangers complicate the decision-making process.

Two ethical theories, utilitarianism and deontology, underpin many of our daily decisions, whether we refer to them or not. As you may know, utilitarians believe the morality of an action depends solely on how much total well-being it produces. Deontologists, on the other hand, maintain that an action's morality depends on the nature of the act itself rather than its consequences. No one falls strictly into one camp or the other, but a simple test can place you on one side of the spectrum:

Imagine you're a bystander in the driver's seat of a self-driving car hurtling toward five people in the crosswalk. The brakes have failed, and the car has only two possible trajectories. If you swerve, you'll kill one person; if you don't, the car will kill five. Do you:

1. Swerve, sacrificing one person to save five.

2. Do nothing, letting the vehicle kill five, to avoid taking an active role in someone's death.

(An AI version of the Trolley Problem)
🩺 PULSE CHECK

What would you do?


🧬GENETICS, AI, AND ETHICS

Utilitarians tend to swerve, while deontologists prefer to do nothing. Though simple decision rules rarely survive contact with the multi-faceted real world, ethical theories still allow us to analyze the consequences of highly impactful AI deployment decisions.
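To make the contrast concrete, here is a minimal sketch of the two decision rules applied to the scenario above; the function names and the way outcomes are scored are our own simplifications, not a real autonomous-driving policy:

```python
# Toy model of the self-driving-car dilemma: each option records the action,
# the number of deaths it causes, and whether it requires active intervention.
options = [
    ("swerve", 1, True),      # actively redirect the car, killing one person
    ("do nothing", 5, False), # remain passive; the car kills five people
]

def utilitarian_choice(options):
    # Judge only outcomes: pick the option that minimizes total deaths.
    return min(options, key=lambda o: o[1])[0]

def deontological_choice(options):
    # Judge the nature of the act: refuse to take an active role in a death,
    # even when the passive outcome is worse.
    passive = [o for o in options if not o[2]]
    return passive[0][0] if passive else min(options, key=lambda o: o[1])[0]

print(utilitarian_choice(options))    # -> "swerve"
print(deontological_choice(options))  # -> "do nothing"
```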

Consider Google DeepMind's AlphaFold, which accelerated research on diseases like Alzheimer's and cancer by predicting the structures of 200 million proteins. Proteins are as critical to the human body as people are to society: intricate intracellular and extracellular processes enable functions like breathing, sight, thought, digestion, and immunity. By understanding a protein's structure, researchers can identify which drugs to develop and how they should target our cells. Despite this revolutionary feat, researchers must still run experiments to validate AlphaFold's predicted structures, which are at most about 90% accurate.

Nevertheless, Google has estimated AlphaFold's potential addition to the "pool of well-being." According to DeepMind's blog, a quarter of the research that leverages AlphaFold seeks to understand and tackle diseases that cause millions of deaths globally. AlphaFold has also likely saved the research world trillions of dollars, as experimentally mapping a single protein structure costs an estimated $100,000, and many structures require multiple attempts.
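To see roughly where that figure comes from, here is a back-of-the-envelope sketch using the numbers above; the single-attempt assumption is ours, so the real experimental cost being replaced would be even higher:

```python
# Rough estimate of the experimental cost AlphaFold's predictions stand in for,
# using the figures cited above (and assuming a single attempt per structure).
COST_PER_STRUCTURE_USD = 100_000    # estimated cost to map one protein experimentally
STRUCTURES_PREDICTED = 200_000_000  # protein structures predicted by AlphaFold

savings_usd = COST_PER_STRUCTURE_USD * STRUCTURES_PREDICTED
print(f"~${savings_usd / 1e12:.0f} trillion")  # -> ~$20 trillion
```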

Though its tangible benefits to human health will be realized over the coming decades and centuries, developing AlphaFold undoubtedly aligns with utilitarian values by speeding up healthcare breakthroughs. Deontologists, on the other hand, may have qualms about the model training process. These ethical concerns include:

  • Informed Consent: Genetic data is typically collected by direct-to-consumer genetic testing companies and sold to pharmaceutical companies and research projects like DeepMind's AlphaFold. The owners of that DNA may never have consented to its use in AI model training.

  • Data Privacy: These individuals may not fully understand the magnitude or implications of their DNA being stored and sold.

  • Liability Issues: Is the DNA of an individual eligible for patenting? Do consumers have a right to a portion of the profit generated by selling their genetic information to help AlphaFold develop life-saving drugs?

🔑KEY TAKEAWAY

To be clear, the answers to these concerns have yet to be made public. Regardless, a single ethical lapse of this kind would be enough for staunch deontologists to condemn AlphaFold. To most, that seems extreme given the project's net benefit. However, the purely utilitarian approach has its own limitations: developing Artificial General Intelligence (AGI) may free us from drudgery forever, but is it the right path for humanity? Time will tell.

📒FINAL NOTE

If you found this useful, follow us on Twitter or provide honest feedback below. It helps us improve our content.

How was today's newsletter?

ā¤ļøAI Pulse Review of The Week

ā€œContent is well-written and accurate. Keepā€™em coming!ā€

-Luke (ā­ļøā­ļøā­ļøā­ļøā­ļøNailed it!)

🎁NOTION TEMPLATES

🚨Subscribe to our newsletter for free and receive these powerful Notion templates:

  • ⚙️150 ChatGPT prompts for Copywriting

  • ⚙️325 ChatGPT prompts for Email Marketing

  • 📆Simple Project Management Board

  • ⏱Time Tracker
