(when you start digging into it, it's a very large umbrella)
From the RAND Corporation, March 21:
Tens of thousands of Americans live with the debilitating pain of sickle cell disease. Sufferers say it can feel like being stabbed repeatedly, like having broken glass flowing through their veins. An announcement this past December finally gave them a reason for hope.
For the first time ever, federal regulators approved a procedure to edit human genes for the treatment of disease—theirs. The procedure makes one small change to relieve a genetic glitch that warps their blood cells into destructive sickles. Experts hailed it as an example of what's possible as scientists learn to manipulate the very building blocks of life. “This is the first mile of a marathon,” one doctor told Scientific American.
Another fast-rising technology could turn it into a sprint. Machine learning is already helping scientists make sense of the genetic keys that could unlock new crops, new drugs and vaccines—or new viruses. But a recent RAND study warned that policymakers may not be fully prepared for the impact those two fields could have together. To make the most of the opportunities to come, and to avoid the dangers, that has to change.
“These fields are both accelerating and transforming the way we do things,” said Sana Zakaria, a research leader at RAND Europe. “What happens when they combine? Do they grow into something bigger and better? Or something worse?”
Genetic engineering has given us corn that repels caterpillars, rice that resists blight, and chickens that can fend off disease. Recent advances in gene editing—especially the tool known as CRISPR, used in the sickle cell treatment—now allow scientists to operate directly on strands of DNA. Machine learning, meanwhile, can devour huge amounts of genetic data and point the way to new medical breakthroughs and better understanding of human health. The two technologies together hold the promise of a future in which our own bodies attack cancer cells and food crops thrive in the heat of a changing climate.
That's the glass-half-full view. The careful-what's-in-that-glass view might point to what happened a few years ago at a small U.S. drug company.
The company was using a machine learning model to search for molecules that could be used to treat rare diseases. It had trained a computer to screen out molecules that might be toxic. As an experiment for an international security conference, and with safeguards in place, it flipped the rules. It asked the computer to identify harmful molecules, not screen them out. Within six hours, the computer had generated 40,000 candidates. Some were known chemical warfare agents. Some appeared to be even more lethal.
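The mechanics of that experiment are simpler than they sound: the same model that penalizes predicted toxicity during candidate screening can be pointed the other way by inverting the weight on that penalty. Here is a minimal toy sketch of the idea, not the company's actual pipeline; every name in it (Candidate, toxicity_score, therapeutic_score, search) is hypothetical and the scoring functions are stand-ins for trained predictors.

```python
# Toy illustration of "flipping the rules" in a screening objective.
# All functions are hypothetical stand-ins, not any real drug-discovery model.
import random
from dataclasses import dataclass


@dataclass
class Candidate:
    features: list  # stand-in for a molecular descriptor vector


def toxicity_score(c: Candidate) -> float:
    # Stand-in for a trained toxicity predictor (0 = benign, 1 = toxic).
    return sum(c.features) / len(c.features)


def therapeutic_score(c: Candidate) -> float:
    # Stand-in for predicted therapeutic activity.
    return 1.0 - abs(0.5 - c.features[0])


def search(n_candidates: int, toxicity_weight: float) -> list:
    """Rank random candidates by therapeutic score plus a weighted toxicity term.

    toxicity_weight < 0 penalizes toxicity (the normal screening setup);
    toxicity_weight > 0 rewards it (the inverted, "flipped rules" setup).
    """
    rng = random.Random(0)
    pool = [Candidate([rng.random() for _ in range(8)]) for _ in range(n_candidates)]
    scored = [(therapeutic_score(c) + toxicity_weight * toxicity_score(c), c)
              for c in pool]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:10]


if __name__ == "__main__":
    safe_hits = search(10_000, toxicity_weight=-1.0)     # screen toxicity out
    harmful_hits = search(10_000, toxicity_weight=+1.0)  # same model, objective inverted
    print("top score, toxicity penalized:", round(safe_hits[0][0], 3))
    print("top score, toxicity rewarded: ", round(harmful_hits[0][0], 3))
```

The point of the sketch is that nothing about the model itself has to change; a single sign flip in the objective is enough to turn a safety filter into a generator of the candidates it was built to exclude.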
For policymakers, that's the challenge: How do you push open the door for new crops and cancer treatments, without leaving it open for a computer-generated catastrophe?
“There's this fear with machine learning and artificial intelligence that it could become this monster that takes over,” said Timothy Marler, a senior research engineer at RAND. “The same happens with gene editing. There's a lot of discussion about the risks these technologies could pose, really existential risks. That's necessary—but it can also overshadow the opportunities. There has to be a balance here.”
RAND's team looked at how China, the United States, the United Kingdom, and the European Union are approaching the confluence of gene editing and machine learning. They found strict regulations and outright bans in some countries on genetic engineering and genetically modified organisms. They found a growing push to establish some guardrails around machine learning. But where the two fields meet, they found, policymakers are just beginning to take action....
....MUCH MORE
RAND has a very deep history in artificial intelligence. From Jeremy Norman's History of Information:
Newell, Simon & Shaw Develop the First Artificial Intelligence Program
During 1955 and 1956 computer scientist and cognitive psychologist Allen Newell, political scientist, economist and sociologist Herbert A. Simon, and systems programmer John Clifford Shaw, all working at the Rand Corporation in Santa Monica, California, developed the Logic Theorist, the first program deliberately engineered to mimic the problem solving skills of a human being. They decided to write a program that could prove theorems in the propositional calculus like those in Principia Mathematica by Alfred North Whitehead and Bertrand Russell. As Simon later wrote,
"LT was based on the system of Principia mathematica, largely because a copy of that work happened to sit in my bookshelf. There was no intention of making a contribution to symbolic logic, and the system of Principia was sufficiently outmoded by that time as to be inappropriate for that purpose. For us, the important consideration was not the precise task, but its suitability for demonstrating that a computer could discover problem solutions in a complex nonnumerical domain by heuristic search that used humanoid heuristics" (Simon,"Allen Newell: 1927-1992," Annals of the History of Computing 20 [1998] 68).
The collaborators wrote the first version of the program by hand on 3 x 5 inch cards. As Simon recalled....
*We Are Going To Hear More And More About Digital Biology (NVDA)
17 key takeaways from the 2024 J.P. Morgan Healthcare Conference