Update: Winding Down
The end is very nigh. Barring anything spectacularly unexpected, I think the next update will be the last one, timed roughly to when the US formally ends its emergency COVID measures (this coming May). I'm leaning towards a "what have we learned" theme for that one, but we'll see.
This one is really just an excuse to be timely with some thoughts on AI, because AI touches on some of the socioeconomic trends that were accelerated by the pandemic and the pandemic response…
Coronavirus
–Case numbers globally are as muted as you would expect given the absence of headlines. XBB.1.5 has pushed out the older omicron cousins. Despite a booster uptake still stuck around 16% in the US, there has been no dramatic new wave, as this variant has very much followed the "more contagious, less severe" trend. This should shock none of the readership–this was our base case. The virus has by now mutated into a sufficiently adapted form, and population immunity is high enough, including a huge amount of natural immunity (likely more protective and durable based on more recent studies), that this whisper of COVID activity will continue.
You can still get a pretty nasty case of COVID, though. Let's not completely minimize things. I definitely do still hear about people popping positive. I would still take the acute treatments if you do get COVID (and qualify for them), to minimize the risk of serious disease and the very bad outcomes that can follow it.
–Johns Hopkins and Case Western, two of my go-tos for current COVID data and leading indicators, have stopped collecting new data since our last update. Yes, Virginia, that is how "so 2020" COVID has finally become. CDC is still reporting though–none of the numbers are particularly worrisome.
–I don't have anything different from the last 18 updates regarding Fauci "misstatements" and correlation/causation errors in vaccine side effect "gotchas". As far as the latter, here is the best meta-analysis out there of the risks of myocarditis from the vaccine versus infection. Myocarditis is more common in men, and younger men especially–overall the risk from the infection is 7-fold higher than from the vaccine (that is averaged across all age groups–some studies show that in young men in particular this ratio is much closer, and the risk/benefit case is not as strong there). Additional government bodies will be convening over the summer, and I expect the risk guidance for the vaccines to be updated. I also doubt there is anyone who was ever going to get the vaccines who hasn't already gotten them. I also think future boosters, if required, recommended, or even made at all, will be very narrowly restricted to populations at high risk of severe disease.
–The Pfizer "directed evolution" Project Veritas "gotcha" has largely disappeared from consciousness. For what it's worth, in their recent quarterly report, Pfizer forecast >50% declines in sales of its COVID vaccine and Paxlovid, and steered towards a 31% reduction in overall revenue this year due entirely to that projected decline in COVID drug sales. That is not a company planning on continuing the franchise for much longer. The cynics among us may say this only shows their incentive to do the nonsense their recently hired urologist was spouting off in the Project Veritas video. Again, that nonsense would burn through a HUGE number of monkeys, which would be traceable if it were happening. No one has traced the monkeys directly. However, there are a couple of recent articles here and here covering how difficult it has been for everyone in research to get research monkeys. The cost of those monkeys has increased by several multiples since COVID began, and restrictions have made it challenging to get monkeys at all. Then a major Cambodian supplier got shut down for smuggling those monkeys. In short, there has been no increase in demand–merely a restriction in supply. That is circumstantial evidence that Pfizer is NOT doing "directed evolution" of COVID in monkeys to try to keep its COVID franchise alive–as Pfizer themselves said in their press release. Again, I find it odd to be defending the honor of Pfizer, yet what monkey evidence is available is consistent with Pfizer's statements. The current price of monkeys needed to pull off that strategy also significantly reduces the business case (what little of one there is) for directed evolution of a worse strain of COVID that you could then release for profit. So, again, consistent with Pfizer's statements.
African swine fever
–This is not a direct threat to humans, but African swine fever, which is very contagious and deadly to pigs, has gone rampant again in China. You can read about it here. It is also spreading to neighboring southeast Asian countries. The main risk is to food prices as China (and other affected countries) will look to find alternative protein sources, increasing food prices more generally.
Health News You Can Use
–More studies show the benefit of regular exercise, the latest demonstrating it is very effective for depression and anxiety. Of course, exercise and a healthy diet can not only make you feel better, but also reduce risk factors for severe COVID all the way through heart disease and diabetes. In the US, healthcare costs are a leading cause of personal bankruptcy. Reducing your overall health care risks with healthier habits makes good economic sense as well.
I hope to cover more of how to form some of these habits, probably in a future ramble…
Socioeconomic
–This will be a longer section, and a little Ramble-esque for those of you OGs. By way of introduction, let’s start with some news you can use.
–Deep fakes, the shorthand term for computer-generated, extremely realistic video and audio, have not only improved since we first discussed them a while back. They are also starting, inevitably, to be turned to nefarious purposes.
For example, this article warns of an increasing trend of scam calls using deep fakes of your family members' voices. The general idea is that with just a few seconds of audio sample, the voice can be duplicated by AI programs already available on the internet. The voice sample can be obtained from a previous scam call that was recorded, from videos in the public domain ranging from TikTok to YouTube, or from what I suspect will soon be a burgeoning black market in the recordings made "for quality and training purposes" any time you talk to customer service at any large corporation. In much the same way, your email addresses have already made it to various black markets. Once they have the voice deep faked, they call you and pretend to be either a kidnapped family member or one in urgent financial distress to get you to wire money to them. Since it's a convincing copy of your family member, you are far more likely to fall for it.
Except now you know this is possible, and the low-tech way around it is simply to preset a secret code or phrase with your family that you can use to check whether they are really in trouble. An example from D-Day is at that link. If you want to level up the clever, the sign and countersign could be words from two different languages, or different accents, or even a total non sequitur that a deep-faked voice would be very unlikely to guess right because of the way it's trained (we'll get to that shortly).
For example:
Family Member: “…next thing I know we’re in Tijuana doing shots of tequila with Sergio and the Tecate ring girls. I only remember what happened after in flashes. It’s like the Hangover movies–except it’s not funny and never should have happened, like Hangover 2 and 3. Although I’m pretty sure Mike Tyson was at the cockfighting ring too… Anyways, I’m here now with a large dude with a bunch of prison tats and some smaller guy, real polite, in a nice suit saying that Sergio was on credit for some of the bets he placed on a rooster last night. But we can’t find Sergio, and they need at least the juice, or it’s on me. I need you to wire me 8 grand as fast as you can!”
You, Extremely Doubtful Of This Story: “Baseball bat.”
Family Member: “Aluminum.”
But, and this is also key, they say it "ah-lu-MIN-e-um", like some Royalist weirdo–and not "a-LU-min-um", the way God intended. The deliberate pronunciation choice is part of the code you preset with the family. Now that you know they really ARE living a bad version of "Hangover 4", at least there will be a story for the 8 grand you should probably wire as soon as you can!
Alternatively, the correct response to "baseball bat" might have been "savoir faire", or a Spanish word deliberately mispronounced without the intended roll of the "r". Or the response to "baseball bat" might have been "negative 82". The key here is that a program trained to mimic voice, inflection, accent and vocabulary well enough to deep fake you will be unlikely to produce a deliberate error, a non sequitur challenge/response, or an answer that shifts languages entirely. It has been trained to recognize those as mistakes, and has programmed itself right out of them. Thus, the program won't try them as "real" answers when challenged by your code!
The key to defeating this scam overall is to have a preset code that is difficult for the deep fake program to predict. Since most of these aren’t that sophisticated (yet), some of these ideas ought to work.
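For the programmers in the readership, the whole scheme is small enough to fit in a few lines. This is a minimal sketch only, assuming a made-up codebook; the challenge phrases and the verify helper are purely illustrative, and in real life the codebook lives in your family's heads, not in a file:

```python
# Hypothetical family "sign/countersign" codebook. The entries are
# illustrative; the point is that the expected answers are deliberately
# "wrong" or non sequitur, so a cloned voice trained to sound correct
# is unlikely to produce them.
CODEBOOK = {
    "baseball bat": "aluminium",        # the deliberately "wrong" pronunciation
    "what time is it": "negative 82",   # pure non sequitur
}

def verify(challenge: str, response: str) -> bool:
    """True only if the spoken response matches the preset countersign."""
    expected = CODEBOOK.get(challenge.lower().strip())
    return expected is not None and response.lower().strip() == expected

print(verify("baseball bat", "aluminium"))  # True  -> probably really them
print(verify("baseball bat", "aluminum"))   # False -> hang up and call back
```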
–We'll get back to deep fakes in a moment, because the probable nefarious ends to which they will be turned don't stop with mere convincing scam calls.
In fact, I think the growth of this technology specifically will end with a societal-level crisis of truth. Again, that discussion coming up in just a bit…
I know, I know. I’m being a tease. Stay with me–I think the payoff is worth it.
First, I think it will help to take a 50,000-foot view of AI approaches and limitations.
So, by way of background, I currently work for a company that is using machine learning for AI-based analysis of digital images of scanned pathology slides–the same ones that go under the microscope every day for pathologists to diagnose cancer and various other diseases from the biopsies and surgeries you patients have done. “Digital Pathology” was a very hot topic at the recent USCAP conference, which is the premier conference for all of the academic pathologists. They are usually early adopters of new approaches, because the university hospitals do lots of research and get to play around with new things.
So why do we think this will work?
Because of how human pathologists already do this. I, as a practicing pathologist, am already a living, breathing, trained image analyzer. This is a rendition of something I might see under the microscope (drawing form, not an actual microscopic image):
Those purple islands with the long blue black dots at the top, and at far right and far left? Those are colon cancer cells.
Why do I know that? There are specific features I could describe to you in pathologist-ese for how we teach aspiring pathologists (and our medical oncology, surg onc, rad onc, and nursing colleagues at inter-disciplinary tumor boards) how to recognize these as clusters of colon cancer cells. But I will spare you the gory details.
Besides, it’s easier for me to show you what normal colon cells look like, so you can see why these are so different. They are bigger, uglier, barely trying to form normal colon structures, and most importantly, these are in the wrong “place” on the microscope slide. Instead of lining the inner layer of the colon, these are now invading through the colon wall, trying to get out beyond it, as cancer does.
But I know these features are “cancer”, which is to say, a pattern of appearance on the microscope slide that shows up in cancer patients. If we sample many colon tumors, they will look similar to this. If we sample a metastatic tumor in the same patient, say a new tumor in their liver or lung, they will look similar to this. So the guys and gals of hundreds of years ago, who first started pathology, started to notice patterns of how these cells look and also had the knowledge of the patient’s symptoms, how they progressed etc. for us to learn that this pattern, this appearance of these cells was “colon cancer.”
The classifications they came up with have been remarkably robust. Indeed, it was a wonder to sit with the “old ones”, trained 30-50 years ago, who could be incredibly accurate and detailed just off an “H&E” image like the above.
I am of a generation that got spoiled by some of the newer testing we have available. For example, we can now do a stain called “immunohistochemistry”, or IHC, which looks for specific proteins in that tumor and stains them so I can see them in the microscope. Or DNA analysis, where we grind that tumor up, and run an assay on it that spits out if specific mutations are present in certain genes. I didn’t have to think as hard or pore over the H&E as long–I could just dial up one of these newer techniques when the diagnosis was in question.
Now, we use those newer techniques to get other important information about the cancer.
In that colon cancer, we would do IHC for a set of proteins called the "mismatch repair" proteins–as some families have a hereditary risk of colon cancer because they are born without one of these proteins working. If I had a question about whether these were really colon cancer cells–say, the sample came from a liver in a patient not previously known to have a colon cancer–I could use IHC for diagnostic purposes: look for proteins specific to colon cancer. (The old ones could often be nearly as accurate just off the "look" of the tumor–they had stared at so many for so long.) In that colon cancer we would also do genetic testing, looking for specific mutations in KRAS (if present, the patient will NOT respond to certain therapies) and maybe BRAF (if present, the patient MIGHT respond to another therapy).
We can do that because the way that tumor looks in the image above is determined by the mutations in its DNA that made it cancer, how that tumor is using those specific mutations (if it is at all–you can have the mutation but not actively use it as a tumor), as well as how the immune system is responding to the tumor, how all the other cells around the tumor are signalling back and forth with the tumor cells, and even how distant signals, endocrine signals for example, are reaching the tumor. This is what we call the tumor phenotype, and my company's bet is that you can tell some important information from just that "look", that phenotype, of the cell. Either triaging for that more expensive IHC and DNA testing, to keep costs down and focus on the cases MOST LIKELY to harbor it. Or predicting phenotypes that might respond better or worse to entire lines of therapy.
Digital pathology and AI will not replace pathologists–what it does is make the old ways new again, as the computer might find patterns in the way those cells look in that image above that are not obvious to us humans, giving us a new way to help patients better.
So how does it do that? How do we teach a computer to not only recognize colon cancer cells from normal colon, but then go deep on how those cells look to maybe give us new insights and associations with treatment outcome, or predict what more expensive tests might show to reduce the cost of health care a bit?
There are two primary strategies, and what is interesting is that I could train humans to do this with these exact same methods.
So one approach, and this is most similar to how we are taught in residency and fellowship, is called "fully supervised learning." In short, a pathologist annotates the digital image of the microscope slide, circling all the areas of colon cancer. This is then shown to the machine learning algorithm, which is told "these circled areas are what you are looking for." It then trains on another set of cases which include both normal colon and colon cancer, and is checked for accuracy on the colon cancer part, until it gets good at finding colon cancer. Then you validate it one last time on a set of normal and colon cancer cases to make sure the algorithm "generalizes"–that is, works on new data, and not just the cases it trained over and over against.
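If you want to see what that looks like in code, here is a deliberately tiny sketch in the spirit of that description. It is not our production pipeline or anyone's real model; the random tensors stand in for annotated tiles, and the network is toy-sized just so the loop runs:

```python
import torch
from torch import nn

# Stand-ins for annotated data: each tile cropped from a pathologist's
# circled region gets a label (1 = colon cancer, 0 = normal colon).
# Random tensors are placeholders so the sketch runs end to end.
train_tiles  = torch.randn(200, 3, 64, 64)          # 200 RGB tiles, 64x64 px
train_labels = torch.randint(0, 2, (200,)).float()  # per-tile annotations
val_tiles    = torch.randn(50, 3, 64, 64)           # held-out cases
val_labels   = torch.randint(0, 2, (50,)).float()

# A deliberately tiny classifier; real digital-pathology models are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    # Train: the model is told exactly which tiles are cancer (full supervision).
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(train_tiles).squeeze(1), train_labels)
    loss.backward()
    optimizer.step()

    # Validate on tiles it never trained on, to check that it "generalizes".
    model.eval()
    with torch.no_grad():
        preds = (model(val_tiles).squeeze(1) > 0).float()
        acc = (preds == val_labels).float().mean().item()
    print(f"epoch {epoch}: train loss {loss.item():.3f}, val accuracy {acc:.2f}")
```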
Similarly, in residency and fellowship, we would be shown an example or many examples of what certain cancers looked like. Having been shown what to look for, in theory, we got better at finding them later when a new case of them came along. Showing the case and our call to our attending pathologists was a check on how well we had trained to recognize these cancers.
The other approach is weakly supervised learning, called in the digital pathology world “multiple instance learning.”
My current employer pioneered one version of this approach. The difference here is that there is no annotation: the machine learning algorithm is NOT shown what a colon cancer cell or cell cluster looks like. Instead, it is given an image from the microscope slide and told "this slide has colon cancer on it somewhere." We also give it images from microscope slides of colon tissue with NO cancer on them, and tell the algorithm "this slide has NO colon cancer on it." So unlike an annotated approach, the machine learning algorithm has to teach itself, by trial and error, what colon cancer looks like.
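Here is the same kind of toy sketch for the weakly supervised case, again assuming made-up data and a toy-sized model rather than anything my employer actually ships. The only label the model ever sees is at the slide level; a simple "most suspicious tile" max is standing in for the real aggregation step:

```python
import torch
from torch import nn

# Each "slide" is a bag of tile feature vectors; only the slide-level label
# is known ("cancer somewhere on this slide" vs "no cancer anywhere").
# Random tensors stand in for real tile features.
slides = [torch.randn(torch.randint(20, 60, (1,)).item(), 128) for _ in range(30)]
slide_labels = torch.randint(0, 2, (30,)).float()

tile_scorer = nn.Linear(128, 1)   # scores each tile individually
optimizer = torch.optim.Adam(tile_scorer.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    total = 0.0
    for bag, label in zip(slides, slide_labels):
        tile_scores = tile_scorer(bag).squeeze(1)  # one number per tile
        slide_score = tile_scores.max()            # "most suspicious tile" aggregation
        loss = loss_fn(slide_score, label)         # graded only on the slide-level answer
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item()
    print(f"epoch {epoch}: mean slide loss {total / len(slides):.3f}")

# No tile is ever annotated; by trial and error the model has to work out
# which tiles, when scored high, make its slide-level guesses come out right.
```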
I can think of some weakly supervised examples from my own career as well.
For example, in medical school, on slower rotations, I would set up some time with a very generous pathologist at the school, who was very dedicated to teaching. I'll call him PD to preserve anonymity here, although he has departed us and his patients far too soon, God rest his soul. I can say that he lived his faith, and Christianity would have a far better reputation if everyone who claimed that same faith lived their life as he did. Regardless, he would set aside a few "best of" examples from that day or week, and I would swing by the lab, review them myself (often without access to the clinical history) and then give my thoughts back to him over a double-headed scope.
There’s that line in “Rounders”, where Matt Damon’s character walks back in to Teddy KGB’s card room, the same card room and same high stakes game where he got wiped out on a bad play he made. He comments as he walks in that one of the great poker players once said that he never remembered any of the big pots he won—but remembered in exquisite detail every bad beat he had ever had. There is also a common saying in medical school that you’re not truly a doctor until you’ve made your first mistake (which hopefully comes in med school, when there are residents/fellows and attending physicians all looking over your shoulder to keep that mistake harmless).
My jiu jitsu gym has a similar philosophy towards competitions— “you win or you learn.”
Similarly, there is one case I reviewed with PD that I remember. Even down to the way the tissue was laid out on the microscope slide. This was a stomach biopsy, and for the life of me, I couldn’t figure out what was wrong. There was no ulcer, no cancer, no inflammation in the gastric lining under the microscope. So I’m there, at the double headed scope, waxing poetic about how maybe there’s just a bit of atypia in some of these stomach glands, somehow, someway, and PD is just nodding along “mmhmm… yeah… okay… maybe a little…” And then he grabs the slide, and moves it down far below the gastric lining on the slide into the muscle layer of the stomach. And yeah, sure, I guess in retrospect there was a lot of muscle there versus what you see usually… And PD says “Actually, this is the lesion.”
It was an uncommon GI tumor that happens in the muscle wall of the stomach called a “gastrointestinal stromal tumor” or GIST.
That was the first, and last, GIST I ever missed though. It’s been in the differential for every stomach slide I have seen since.
You win or you learn.
Another real-life MIL for humans came in residency. I was on the head and neck service as a first year resident, paired with a senior resident. Our attending was one of the old ones, the guy who literally wrote the book (three volume book, technically) on head and neck pathology. I would give him even odds against a computer to recognize unusual patterns just by the "look" of the tumor—after all, he had the years of intensely staring at thousands of examples of these already. So we get this case that we're struggling with, because the entire slide looks like thyroid cancer. I mean, really looks like thyroid cancer to us. But if it is, the entire thing is cancer, which is unusual. And it means when the PA measured the tumor, they were off by at least a factor of 10, because they described only a tiny little maybe tumor possibly there. So we're at the multi-headed scope with the Old One, a lovely, soft-spoken gentleman who still wore a white lab coat and tie every day. The senior resident, by virtue of being senior, is the one going over our thoughts on the case. Similar to me on the GIST, waxing poetic about how this is really atypical, and it's everywhere, but the tumor was measured much smaller so we were thinking it was some kind of generalized thyroid atypia. Even though it's got all the features we humans look for to diagnose papillary thyroid cancer.
And here, for full effect, you need to know that Dr. Old One could easily do the voice acting for “Butters” from “South Park” and no one would notice the difference. For those unfamiliar, this is “Butters.”
“Mmhmm… yep… mmhmm…” said the Old One, cruising around the slide as the senior resident is explaining how all the features of thyroid cancer are there, but we couldn’t bring ourselves to pull the trigger.
“Welp… we’re going to call this ‘papillary thyroid carcinoma’…” said the Old One, ending the discussion.
“Classic type.”
Then he took the slide off the scope and held it to the sleeve of his white coat.
“And we’ll say it’s about, oh, 2.3 centimeters in greatest dimension, and correct that PA’s measurement,” he said.
“You know, the best part about being able to go over cases with the trainees around the scope is that you really learn what they know and what they don’t, mmhmm” the voice of Butters finished.
I’m told the funerals for those residents at the scope that day were “quiet and dignified.”
I’m a lot more confident to call it when it’s there now, though.
Alright, cool path stories, bro, I know—but suffice to say, there are overlaps between how humans learn and how machines learn. Both fully supervised and weakly supervised methods can be quite effective.
Whether the approach is annotated or MIL (weakly supervised), the way the machine "learns" is by breaking the image down into tiles a certain number of pixels across. These tiles are then broken into "features." To keep this easy to digest (Google actually has a free machine learning course that covers a lot of this as well), think of features as the various aspects of the image the machine is going to evaluate to learn what a cancer cell looks like. So for example, how blue or purple the pixels in the tile are. That blue or purple will come from the nucleus, and in general, a bigger, darker blue, "angrier" nucleus is more likely to be cancer. So that might be one feature.
The important part is that computers "think" in math, so each "feature" gets turned into a numeric value, and the model attaches a "weight" to it. The machine "learns" by changing these weights every cycle. The weighted features are processed through the algorithm, typically one of several forms of "neural network." We'll leave that beyond the scope as well. At the end, the weighted numbers from all the features in all the tiles are aggregated (by an "aggregator" within the code) into a single score. That single score results in a prediction—in this case, the learning algorithm will "guess" whether this slide has colon cancer or not based on the aggregated weights it gave to all of the features. It will compare its guess to ground truth—whether the slide had cancer or not, which is known already. It will keep guessing and comparing, by trial and error, over and over again, until it arrives at a set of weights for the features that predicts as many cases as possible correctly as either colon cancer or not colon cancer.
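To make the "image becomes numbers, numbers become a score" idea concrete, here is a toy sketch with no machine learning at all: one hand-picked feature ("how blue is this tile") and one fixed, made-up weight. A real model would have thousands of learned features and weights; everything here is an illustrative assumption:

```python
import numpy as np

# Toy stand-in for a scanned slide: an RGB image as a numeric array.
# Real whole-slide images are gigapixels; 256x256 keeps the sketch fast.
rng = np.random.default_rng(0)
slide = rng.random((256, 256, 3))

def tiles(image, size=64):
    """Chop the image into non-overlapping size x size tiles."""
    h, w, _ = image.shape
    return [image[r:r + size, c:c + size]
            for r in range(0, h, size) for c in range(0, w, size)]

def blueness(tile):
    """One hand-picked 'feature': how blue/purple the tile is on average
    (a rough stand-in for big, dark, 'angry' nuclei). Just a number."""
    return float(tile[..., 2].mean() - tile[..., 0].mean())

# A learned model would adjust this weight (and thousands more) by trial
# and error; here it is a fixed, made-up number.
weight, bias = 3.0, -0.1

features = [blueness(t) for t in tiles(slide)]                            # image -> numbers
slide_score = sum(weight * f + bias for f in features) / len(features)   # aggregate
prediction = "colon cancer" if slide_score > 0 else "not colon cancer"
print(f"aggregated score {slide_score:.3f} -> {prediction}")
```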
Some of you will immediately wonder if the order of "guesses" matters—after all, the computer could drift to a less accurate set of weights by accident. Since it is only working with numbers, and doesn't actually "see" the image the way we do, there may be multiple sets of weights that arrive at a working answer. In short, to guard against this, machine learning models are trained in "folds". For example, 8-fold training means 8 separate models are trained on the same "colon cancer: yes or no" task, each on a different split of the data and each starting from its own very different set of random weights, and you check that they converge on similarly accurate final answers. You can also have the algorithm randomly spike its weight guesses a LOT during training to make sure it doesn't funnel into a less correct final answer.
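Here is a sketch of the "same task, different random starting points" part of that idea, with made-up data and a hidden rule the models have to rediscover; the eight runs and the toy rule are assumptions for illustration, not anyone's real validation protocol:

```python
import torch
from torch import nn

# Toy "colon cancer: yes or no" data: random features plus a hidden rule
# the models have to rediscover.
X = torch.randn(300, 16)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()

def train_one_run(seed: int) -> float:
    """Train the same tiny model from a different random starting point."""
    torch.manual_seed(seed)              # different initial weights each run
    model = nn.Linear(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.5)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()
        opt.step()
    preds = (model(X).squeeze(1) > 0).float()
    return (preds == y).float().mean().item()

# Eight independently initialized runs: if they all land on similar accuracy,
# the final answer is unlikely to be an accident of one lucky starting guess.
for run in range(8):
    print(f"run {run}: accuracy {train_one_run(run):.3f}")
```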
So in shorter form, hopefully closer to the King’s English, the sausage gets made by the computer taking the image we see and making a digital copy of it. The computer then chops it into smaller pieces and pixel by pixel, the computer changes it to numbers. It then changes those numbers by making guess after guess after guess about what they “should” be until the sum of those numbers lets the final answer predict “colon cancer yes or no” most accurately.
In supervised training (annotation), the computer is told which tiles are most important to focus on first. In weakly supervised training (i.e. MIL), it’s told the answer you seek is somewhere on here—keep changing until you find it. There are advantages and disadvantages to each method. Annotation may take less time and computing to reach a model that works, and may need fewer cases to do it—but annotating ahead of time takes a LOT of effort and time. Further, the machine is largely stuck with what is already known—if there is something subtle that humans have missed, but a computer can find, the computer is less likely to find it because it’s already being told what is important. But what is “important” was pre-determined by our puny hooman brains and eyes. There may be a different, better answer out there that we have missed entirely over the years, and an annotated approach is somewhat less likely to find that. Weakly supervised methods require LOTS more data (cases) to train, and need to be checked on the back end to be sure they actually do “see” the colon cancer in their final answer. But by being handed an essentially blank slate, and not told what humans found most useful in identifying colon cancer, the machine can sometimes come up with surprising insights that we missed.
Give a human pathologist 1,000 cases of colon cancer and make them stare at them over and over for thousands of hours, and they might start to notice some new patterns. The advantage of machine learning is that a computer can do that same task, but process those 1,000 slides in a few minutes, all while never getting bored or tired, for hours and hours on end—racking up in a couple of weeks a number of total passes over those cases that would take a human pathologist years.
The biggest difference, though, is that machine learning and this kind of AI is very domain specific. While I, as a human, can take what I know about the way colon cancer looks to make a guess that a cancer I have never seen before is still cancer, the same isn't always true of AI models. An AI program that is great at detecting colon cancer will sometimes be only as good as a flipped coin (sometimes worse than a flipped coin) at detecting a different kind of cancer, like lung cancer. Some AI programs can get good at recognizing many different kinds of cancer, but couldn't recognize chronic gastritis (a common, benign inflammatory finding in a gastric biopsy that isn't cancer) that would be really easy for a pathology resident who had seen both before. ChatGPT famously had no idea how to play chess when it was first going big in media coverage—because it's not a program that really "knows" what the rules are or "sees" the board. It only makes educated guesses from text it has seen on the internet ABOUT prior chess games. That means ChatGPT didn't treat the moves as specific to -that- game being described, and would chuck out moves from many different games, even when they were not possible in the actual board position. Similarly, you could take Stockfish or AlphaZero, two of the premier chess AI programs, and try to chat with them. I guess. They do chess well. They don't talk. Nor can they always explain why they make the moves they do—you only kind of find out 22 moves later that a weird looking pawn move was absolutely necessary right there. Even then, there are moves and evaluations that Stockfish will make that mystify even the very best human grandmasters.
And the strongest crop of chess players ever to live is playing right now—in part because their games have been honed by learning from what Stockfish and AlphaZero do in, and "think" about, certain positions.
Those human grandmasters will play a very specific style of game when playing against a computer–something they would not use against a human. Because they have learned that is a pattern that gives them a fighting chance for a draw (they won’t win–the computer will eventually out-calculate the puny hoomans).
But exploitable patterns, even within the domains an AI dominates, are a bug that turns out to be a feature for its human opponents. Back in February, a highly ranked amateur Go player soundly beat a top Go AI program. This is newsworthy because the same kind of program had beaten the best human player in 2016 (in fact, chasing that guy into retirement because the computer crushed him so completely). The amateur player won with a strategy honed and tested with another AI, set up to find patterns in the "Master of Go" AI programs. Turns out you could boil the frog–slowly surround the champion AI's stones while periodically making random moves into space it would systematically fixate on. A decent human player would see they were being slowly surrounded and react. The world champion AI Go program could not–because it had never seen such a basic, obvious strategy tried against it before, so it didn't "know" what was happening!
In short, the AI we have now is really, really good at the things it is specifically trained to do that computers can do well. What it sucks at is adapting when the situation changes, or is markedly different from what it learned to do.
But humans adapt well, because the real world changes and occasionally breaks patterns and rules on us.
That's the biggest difference between the domain-specific AI taking headlines by storm and truly generalizable, Skynet-style AI. We are light-years from the latter, and I am not convinced there is a math that crosses that divide well.
—So speaking of ChatGPT, and its superpowered new version GPT-4, as well as all the various deepfake and AI art programs you have been hearing more and more of in what is turning out to be the year of AI really coming out…
The way they are trained and work is not all that dissimilar from the pathology approaches I mentioned above—which is why we spent all that time there. In some instances, they were supervised for specific tasks and functions. In others, training was accomplished by letting the algorithm learn by itself via guessing, and then being told if its guesses were right or wrong, and letting it go back and try again to see if it could get more right than last time.
One big difference is that many of these "in the news" applications, the deep fake programs and AI art tools like Deep Dream/DALL-E 2/Images.ai among them, lean on generative approaches, and the deep fake tools in particular on one called a "generative adversarial network" or GAN. (GPT-4 itself is trained differently, mostly by guessing the next word over and over and being graded on it, but the "generate a guess, get it checked, adjust" flavor is much the same.)
Here, you train a "discriminator" program to do a certain task. So we make our "colon cancer yes or no" program as before. Except we also create a program that will be the "adversary." The adversary program tries to create fake examples of colon cancer that will fool the discriminator program. As the discriminator gets better at telling real from fake, the adversary has to get better at faking, and the two improve each other in a loop. As you can imagine, to make a deep fake program, you actually want the "adversary" to win—creating fake videos that a good discriminator calls "real." Training programs to automatically recognize and remove deep fakes will thus be an arms race between programs trained with this technique.
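Here is a minimal toy sketch of that loop, with plain numbers standing in for images so it runs in seconds; the "real" data, the two tiny networks, and the target value of 4 are all illustrative assumptions, not any real deep fake system:

```python
import torch
from torch import nn

# Toy GAN: the "real" data are just numbers drawn from a normal distribution
# centred at 4; the generator must learn to fake numbers the discriminator
# can no longer tell apart from the real thing.
def real_samples(n):
    return torch.randn(n, 1) + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2001):
    real = real_samples(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator's turn: get better at calling real "real" and fake "fake".
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator's (the "adversary's") turn: get better at fooling the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: fakes now average {fake.mean().item():.2f} (target ~4.0)")
```

The same loop scaled up to images, with vastly bigger networks, is the flavor of training that makes convincing fakes possible, and also what makes fake detection a moving target.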
So yes, this adversarial training is part of why deep fake programs are getting -really- good, and the broader "guess and get graded" training is why GPT-4 can reportedly pass the bar exam around the 90th percentile. But this is also why they have some of the weaknesses they do. The challenge/response code counters a deep fake voice program used for scam calls because a deliberately "wrong" response would have been trained right out of the deep fake, to make it maximally convincing as the person it is trying to impersonate. GPT-4 is trained to produce the most plausible answer it can, so everything it says is, at bottom, a guess. Those guesses have been tuned to be maximally correct (and the models will continue to improve with more training and feedback), but they are still guesses. Sometimes it is really wrong and just making stuff up. GPT-4 will be very confident about those errors, though—again, because that's how it is trained. Make confident guesses, get told whether they are right or wrong, then adjust based on that feedback.
Let’s finish this brief background of AI by emphasizing a few key points. None of these, including the ones in the news, are a generalized artificial intelligence. Some of them look quite convincing, like GPT-4 and ChatGPT, which can pass a Turing test and hold a conversation on a lot of subjects well. They can pass standardized tests in a multitude of fields.
These are still not a generalized artificial intelligence. This isn’t Skynet or HAL. We’re closer to the computer you can just talk to and have it do things, like in “Star Trek”–true. But these are all still task and domain specific programs. All the standardized tests that GPT-4 can rock are, in the end, standardized tests. The answers are well defined and out there, so GPT-4 can find them or deduce them fairly easily. One of the multiple choice answers on the bar exam question is correct–GPT-4 just has to figure out which one.
So sure, GPT-4 can pass the bar exam. What I don't think GPT-4 can do is argue your specific case in court. Or judge whether "geofencing" a crime scene to see whose phones were nearby, and then having the police pull in the owners of all those phones for questioning, violates due process and the need for a warrant. Yes, that geofencing example is possible, and is currently being debated in our courts. GPT-4 might be able to deduce what case law might inform that judgment—but it has no skin in the game. And humans can just change the rules by changing the laws, or the people making the laws. GPT-4 is just an abstract computer, and its opinion is just the numeric output of the weights it has attached to various features. Truly novel scenarios are likely to challenge its limits.
That said, GPT-4, especially as it continues to improve, will be a significant upgrade for its ability to rapidly recognize and summarize useful knowledge, in context. For example, I just asked GPT-4 a few questions about fungi in the human gut microbiome. You always read a lot about bacteria in the gut microbiome (because they are easier to study)—but fungi have to be there too. You get a nice general run down. I even got GPT-4 to suggest a couple good recent review articles on the subject. Reading through those, I think one of the authors is too quick to dismiss high incidence of Saccharomyces and Malassezia species simply because they appear to be non-colonizers. They may not stick around in the gut, but they are detectable a lot of the time, because the former are the yeast in bread and beer, and Malassezia are common skin contaminants you swallow frequently. Even if they are not sticking around, their “pass through rate” in the gut will be high, especially based on your diet, and they may be present in quantity and frequency to influence the host and the colonizing microbiome. In short, GPT-4 helped me find the current state of the question, and what appears to me to be some interesting frontiers—all in a few minutes.
But the review articles it found were the most useful part of that.
This doesn’t make the knowledge worker obsolete. The scientists working on fungi in the gut microbiome are still out there doing their thing. What GPT-4 does for me, as a “knowledge worker,” is make some of what I’m doing much more efficient. Perhaps even help me find the edges of what we do know, to explore ways to go beyond those current limits.
I saw it said online, and it’s very true, that “the invention of the calculator did not eliminate accountants. They just got more efficient.”
The chief advantage the "knowledge worker," and the rest of us pitiable hoomans, have against this onslaught of AI is intuition in the face of truly novel scenarios and in applying ideas and principles from one domain to another. GPT-4 will suck at that. For example, there was discussion of a novel inbounds pass play used by one team late in a game, at a critical moment, in the recent men's NCAA basketball tournament. The play used was fairly novel—for basketball. The play design, though, was inspired by something the coach had seen in pro football to get receivers open. Only this time, it got basketball players open for a clutch pass. While GPT-4 could describe a "scrape play" to get wide receivers open in football, using it, or a pass tree concept, in a different game entirely is beyond current and probable near-future AI capability.
Where we will need to worry though, at least in our day to day jobs, is this. If GPT-4 can pass the bar at the 90th percentile, as a lawyer, you need to assume you need to be functioning at around a 90th percentile level. Or you better be really good at whatever part of law GPT-4 doesn’t do well, that takes it from 90th percentile to 100th percentile. You will need to be able to fill in that last mile—or yes, you will be obsolete.
Also, one wonders if we need quite so many lawyers to provide that “last mile” service that GPT-4 cannot do. Or the other parts of legal that GPT-4 cannot do. Like argue in court for you.
That said, I suspect the efficiency gains from letting GPT-4 and similar programs do what they are best at will free the time for even greater heights and innovation from our existing knowledge base.
There are other industries that could see radical change. Deep fakes and AI art programs, for example, can be combined to create supermodels. With bots running their social media, ad campaigns could be designed around spokespeople who do not actually exist. Imagine a model that could never have a major scandal. Before you scoff, this is already happening. AI "model" agencies like Lalaland.ai already exist–and are already getting customers.
For example, Levi's, as in the jeans, announced that it is planning to launch a campaign using AI generated models. Of course, this being 2023, their press release touted how this would allow them to digitally create every body type and skin tone possible, and thus represented Levi's commitment to diversity, equity and inclusion, as well as sustainability, because they presumably would not be flying or driving living human beings to photo shoots. As SBF more or less admitted after he got pinched for a massive ponzi fraud, the real motive (profit–because they are paying fewer models and photographers) must be concealed with the appropriate noises towards certain "shibboleths", to use the (alleged) ponzi schemer's own word. You should read the linked press release, because it includes the corporatespeak response to the obvious outrage that Levi's decided to use AI generated diversity rather than, you know, hire and pay real, diverse human beings.
Treating the perception of effectiveness as being as good as actual effectiveness may not be limited to politicians, it seems. But I digress slightly.
Levi's will not be the last to go in this direction–others will follow the same profit motive, just with a better corporate cover story. In parts of the world where reputation doesn't matter as much, because there is already ill repute, this is likely to happen faster. For example, if there are not multiple OnlyFans or cam girls that are completely artificial by now, all their spicy content computer generated (if not deep faked over the bodies of others) and all their digital communications with customers generated by one of these Turing-test-passing chat bots, there will be soon. The entry barrier is falling, and you can make enough money that someone will try to run a stable of OnlyFans catfish.
On the one hand, advertising may get cheap and easy. That’s an arguable benefit. Social media “influencers” may have to find something more useful to do. Also an arguable benefit. Real world modeling and Hollywood do not have the best record of treating models, actors and actresses especially well. Disruption of that model (no pun intended) is probably an arguable benefit too.
But does that mean that AI generated models, spokespeople, advertising, actors and actresses, and yes, even AI generated porn stars will completely drive the humans out of these roles?
I see that suggested in the media. I still kind of doubt it.
In fact, I suspect that while AI will make significant inroads into these industries, increased use of AI will also create a premium value for authenticity. After all, part of the reason social media influencing became a thing is that it was a real person, whom you got to know, recommending a product—not some paid shill, even though many of them are, indeed, paid shills. I suspect my wife is not the only one who takes the recommendations of the social media groups she is on as gospel, versus just standard direct to consumer advertising. Don’t we all read product reviews now, looking for what appear to be real people who have used the product before?
I suspect that, for all the advantages listed above, AI enhancement or outright AI creation of completely computer generated models/salespeople/celebrities/actors will happen.
But I also suspect there will still be a role, if not a premium, for the authentic. For the real.
I read a Japanese author once explaining the art aesthetics of wabi sabi and mono no aware. If you are unfamiliar, this is the emphasis in a branch of Japanese art on beauty that is "imperfect and transient." An example might be a broken tea pot, glued back together, crack still prominent through the patina of its age. The idea of this art is to remind us that nothing is truly perfect, that beauty, and arguably all ideal states (love, joy, triumph—even negative ones like despair) are fleeting, and there is a real poignancy to the memory of those peaks in what remains after they have passed.
The counter-argument will be that we can now create the perfect body, with the perfect face, even if it cannot truly dance or talk either (apologies Phil Collins and Genesis). Our new AI supermodel will be honed by testing to be perfect. Given an AI personality that never fails, never falters, is always on point. As ageless as our new supermodel. Or we can simply replace them for the attention grabbing advantage of novelty once it seems KateUptonAI v1.0 is not generating the same view metrics they used to—with hardly any additional marginal cost to the new EmilyRajatkowskiishardtospellAI v1.1. Maybe it can always pass the Turing test, and march boldly through the uncanny valley.
But I suspect that wabi sabi and mono no aware will prevail. Humans, real live humans, in some of these roles will continue to succeed because they are imperfect. Imperfection, flaws and failings capture something significant in the human condition too. We see ourselves in them. Well was it said by a priest once, paraphrasing, "Thank God some of the saints were terrible sinners in their pasts–because if they can turn it around, overcome their flaws and failings, and find redemption, then so can I!"
But these are just the industries getting obvious mention. One industry that will be pressured by GPT-4 and other broad-knowledge AI applications, although I don't see it being mentioned much yet, is college. We have been cranking out highly credentialed expertise, but that expertise is increasingly in minutiae. Just look at the ever expanding, ever more sub-specialized universe of academic publications. Before, we might have needed finely detailed human expertise in all of those many subjects to justify all of that subspecialization. I'm not so sure we will need nearly as much of it in the near future. At most, we'll need only a few of those experts to keep GPT-4 and its successors reasonably accurate, and those few will largely self-select for the positions, because they are the rarities who are truly obsessed with some extraordinarily obscure subjects.
I don’t think colleges will go away, done in by the democratization of accessible knowledge and expertise via AI guidance. I do think colleges and universities will need to re-orient the curriculum to stay relevant. If anything, I think there is a strong argument, especially in undergrad, for a liberal arts emphasis. With AI as such a powerful “general knowledge,” even expert knowledge, tool, integration and innovation across disciplines is what will succeed in the very near future. There is even an argument to be made for greater cross pollination between disciplines in the advanced degrees, like masters and doctorates. For the same reason. The purpose of the advanced degrees (beyond the professions requiring doctorate length training, like MDs, DVMs, DOs, arguably JDs etc.) will be less about filling the need with deep human expertise, and more about pushing the limits of knowledge and creating new knowledge and fields from the edges revealed with AI assistance.
That’s all the good and optimistic disruption though–which will be simultaneously more and less than you expect from the lay reporting. There are definite dark sides and risks to this technology as well. No, not that we will one day have to worry about dystopian future wars against Terminators or get plugged into the Matrix as living batteries to feed the new machine overlords, despite what Elon Musk fears. I, personally, still don’t think we are close to a generalized AI like that.
What we have now is a lot of AI that is getting very good at certain specific tasks. These are tools, not a sentient computer with its own goals and personalities. The good or evil of any tool depends on the purpose to which its user applies it. Same for these task specific AIs. In the hands of the wrong user, or applied for ill ends, serious societal harm can be anticipated. Yes, even using some of these tools with the best intentions can easily pave the road to hell with AI level efficiency.
Same as it ever was, though. Humankind invents a new tool, then finds a new way to use it poorly. Because somehow it is humankind’s destiny and curse to eventually succumb to the angels of our lesser nature. If a new tool can be used in ways to get more money/sex/power for one human at the expense of others, it’s not if but when the tool will be abused in exactly that way.
So what shape will the predictable abuse take?
Some, I think, are easy to anticipate based on current trends.
The war for your attention, the battle for schismogenesis, the struggle for your very mind and soul in a deluge of information weaponized to try to get you to feel/think/act in a particular way is a fertile ground for abuse of these technologies.
Set aside whether you think the release of the "Twitter Files" has involved selective reporting and selective release of details. What they reveal, along with the 10,000 leaked WhatsApp messages of UK government officials from the early COVID outbreak that caused all the outrage across the pond recently, is the incentive structure of government. One of the major incentives, and major values, of government is perception management.
Just sticking to the COVID example, think back to the early days of the outbreak, when details about the virus were sketchy. Readers of this update knew it was out and spreading in communities globally before official reports said so, knew we had estimated that the early mortality figures would come down to roughly "bad flu" level, and knew that early reports showed deaths clustering in the elderly and those with serious underlying medical conditions, even as the virus was putting a large number of people into the hospital fast. We said at the time that the decision to lock down at the city, state or national level was tricky, and would come with serious socioeconomic consequences. That decision was made looking mostly to protect against the downside if we, in this report, were wrong and mortality hit the young and healthy too, at a much higher than "bad flu" rate.
Again, a difficult call that I am glad I didn’t have to make.
But, let’s assume we have another pandemic of something that is clearly worse than COVID. What the “Twitter Files” and the UK WhatsApp leaks show is perception management by the government, looking to influence channels of major communication like social media to emphasize the seriousness of COVID and de-emphasize or silence entirely messages that might cause citizens to NOT follow the recent government regulations.
So let’s put the question this way. If you were in government, had a pandemic that was CLEARLY worse than COVID, and had an unpopular policy to enforce to mitigate as much of the public health damage as possible, like lockdowns or mandatory vaccinations, -AND- you now also have VERY convincing AI assisted deep fake abilities and GPT-4 abilities to perfect and target messaging…
…well… do you have a moral obligation to use them? Even if the deep fakes and messages your AI creates are deliberately skewed to be as scary as possible, so that your unpopular policies appear to be the lesser of the two evils? You might save lives–even a lot of lives–by doing it. So shouldn’t you?
Be careful with your answer. That slope is slipperier than it looks.
Let's take another example. Let's say you are Greta Thunberg, tweeting urgently in 2018 that new reports reveal that without urgent action then, right then, in 2018, climate change would cause humans to be extinct by 2023. For the record, Greta was recently roasted online for deleting this exact tweet, since humanity is not quite yet extinct. Regardless, let's say you're Greta. You're convinced that climate change is an extinction level threat to humankind. You have access to top tier deep fakes and GPT-4 to hone your messaging. If you believe, truly believe, that climate change is an extinction level event, are you not morally obligated to abuse these tools to convince others? Are a few lies worth getting that truth across, and action taken to avert disaster?
Or let's say you are among the most extreme elements of the current schismogenesis in the US body politic. You believe, truly believe, that the other team, whichever it is for you, is truly corrupt, misguided, if not outright evil. They are all, at heart, the worst examples of what you have been shown, shown precisely to further the schismogenesis in your heart. To convince you that your tribe is not that. Definitely not that. You believe, truly believe, that the goal of the other team really and truly is the totalitarian authoritarianism you accuse them of, be it full-on degenerate communism or some fusion of nationalistic oligarchy and theocracy. Regardless, if they win, the dystopia you fear becomes inevitable. But you, lucky you, have access to deep fakes and GPT-4 for messaging. You even have some of the new AI start-ups out there looking to apply AI to emotional messaging in advertising (for now). Since, as we have discussed before in the update, it's the emotional part of the message that resonates best and drives action most consistently. It even influences how subsequent messages are more or less likely to impact you.
Yes, Virginia, that is getting lots of VC money behind it to build AI to manipulate your emotions better with messaging you get. Sleep well.
Anyways. You have the AI tools to disrupt the message of the Other Team. Before they can win, before they can get power they cannot help but abuse. Before they can usher in the dystopia they clearly desire, in their naked lust for power. Are you not morally obligated to use these tools to stop them?
Even more fun question–raise your hand if you can think of people you know, right now, who would be willing to believe and accept as true any reasonably close, even if still probably fake, deep fake or GPT-4 driven or emotionally manipulated content? Because they are already so far down a schismogenic extreme? So locked into a tribal identity? They want to believe, and so they will.
These people are already your “likely voters.” They’re already committed to a team. Committed to winning. The tools exist NOW to make their information silos even more compelling, more enticing, more attractive, more weaponized, and more tribally and schismogenically reinforcing than ever before.
Sleep well.
“But surely, these are extreme examples,” you say, Hypothetically Growing Concerned Reader. “Besides, you said above that GAN training is not only why they get so good at these tasks, but also how to train AI that can defeat them. The deep fake AI that learns to fake past the best detection model wins the deep fake gold medal–but that just becomes the test model to train a better deep fake detecting AI in a constant arms race. Also, our government has checks and balances to avoid some of this predictable abuse. We can also demand legislation and greater controls to avoid this predictable abuse too.”
Perhaps. My “dark mirror” counter-argument is that power corrupts, and the power to control minds and decisions is absolute power. I care not whose hand is over the nuclear button if I can control, perfectly, when and if that button will be pressed. I am less sanguine that politicians, particularly given the moral character revealed in so many of them recently, will avoid temptations here. In the US, they cannot even legislate away their ability to conduct insider trading. You think they will put meaningful restrictions on their ability to manipulate your opinion? I love your optimism!
Even if, by modern political miracle, your own government restricts use of these tools on you… maybe even restricts their use for influencing other countries…
Think that will stop Russia? Or China? Just imagine if Roman emperors had these tools–do you think they would have relied on bread and circuses, or just AI generated the right distractions? So do you see a scenario where Putin or Xi have these tools and do NOT use them, both on their own populations and as a way to create political disunity in nations they see as adversaries? These terrors are already out of Pandora’s box. All over the internet. Even if China and Russia don’t have the source code now, they know it can be done, and will seek to do it.
I think it gets worse before it gets better, quite frankly. I think a time is coming, probably sooner than we think, when there will be a real crisis of trust in anything. One could argue that even with the information deluge we have now, it's already difficult to impossible to know what is really happening in the world, and why. The war for attention and schismogenesis has made everything (even vaccination campaigns, fergodsakes!) about narrative, first and foremost. Truth has not survived contact with that enemy as often as it should.
Now let the deep fake legions roll.
From advertising, to news, to politics, all now AI-weaponized to the “truth” you want to believe. Whether it is accurate or not. Legions of the extremes of the tribes now “convinced” by deep faked smoking gun evidence of their righteousness.
In the extreme version of this crisis, you simply cannot trust anything that is not happening locally enough for you to see and know what is really occurring, right in front of you, in the real world. How do you maintain a globally interdependent, just-in-time manufacturing society in that scenario? How do you govern countries that span entire continents? How could you govern anything larger than a city state? If there were a truly global threat, like a dinosaur-killer asteroid discovered heading for Earth, how could you mount the co-ordinated response that would require? Inundated by deep fakes, numb from the experience of being fooled by them once too often, would enough people believe that the asteroid was real to do anything about it?
I wonder if the only way through that is the old way. Personal relationships, connection. Honor and integrity. Again, the emphasis, the need, would move to a higher value on authenticity. Who you know, whose character you can vouch for and testify to, and who can attest to yours, becomes paramount.
For example, a lot of the content of these updates (and one of the more popular sections, frankly) came from all of you reaching out because you saw or heard something that pinged your “Plandemic!” bullshit meter. Or something that sounded juuust plausible enough, but had serious enough implications if true, that you wanted a fact check.
That you chose me for that is, no joke, a great honor. Not one I take lightly, or without becoming a little verklempt thinking about.
In the dark age of the deep fakes and weaponized AI for schismogenesis and narrative purposes (sadly, I think this dark age is more likely than not, at least for some brief period), this is the way through.
All of you will need to become the bullshit filter in your own domains of knowledge and expertise, while at the same time relying on those in your network to filter truth and avoid the siren call of schismogenesis. (Thank you to our South African correspondent–I immediately call to mind all the times I have had to run headlines past them for a reality check, because Western reporting on Africa is of such uniformly poor quality.)
The transition to greater authenticity will not be easy. We’ll have to be a lot more honest with ourselves that there is quite a lot we don’t know. Some things we are certain of just ain’t so. That takes a humility we are unaccustomed to–especially as we get older and think we should finally know better. I suppose the final wisdom of old age is discovering that you never actually quite reach the age where you do, consistently, know better. But perhaps that change in ourselves, learning to count on, and trust, one another again is the salve our current age requires.
The beauty, the truth we seek is in ourselves–but its style is wabi sabi.
Impoliteness everywhere, inconsiderate driving, and that pervasive sense of isolation, gloom, fear and chippiness?
Maybe deep fakes inadvertently solve that, by forcing us to rely on one another. Trust, and be trustworthy.
What also helps protect us is less ephemeral. Less feelings ball. There will be more than one GPT-4. Already, the truth has trouble hiding on the internet. The other thing the “Twitter Files” show is that even government pressure was not enough–other voices and opinions got out. Some were wrong. Some turned out to be right, like the early suspicions that a Wuhan virology lab leak was at least possible. Centralized control of these new AI tools will be impossible. They will democratize, for ill, no doubt. But also for good, because the same AI capabilities can be used to combat their misappropriation.
Dystopia is possible. I do not view it as the most -probable- outcome.
Crazy as it sounds, I think -we- are still the safer bet. As often as we go astray with the angels of our lesser nature, we do, sometimes, rise with the angels of our better nature too.
–Finally, this has become epic in length already, so I’ll wrap the rest up as a few quick hitters.
I somehow missed this at the time, but Nature, a leading international journal of hard science, published an op-ed endorsing a candidate in the last US presidential election. I have no idea why they would do this either, other than running on raw emotion. Politics is the set of decisions and opinions made, presumably (but clearly not always), on an agreed set of facts. Science’s role is to provide those facts as best as they can be known, without agenda. Science must be agnostic and apolitical in its function; if it fails at that, it will fail to provide the facts needed for us to make the best informed decisions we can. When that happens, science, as an institution, will lose the trust it has garnered through its track record of dispassionate and relentless pursuit of the truth that can be known about the physical world. (The metaphysical is largely, by definition, beyond what science, as the branch of rationalist philosophy it originates from, can interrogate–for the record.) Regardless, the results were utterly predictable to everyone but Nature’s editors, it seems:
Play schismogenic games, win stupid prizes.
What’s even worse is what an “Ugly, Narcissistic American” moment this was–as if the domestic election of the US president is a matter of such clear world importance that Nature’s editors felt they had to get involved. Before you “@” me that the US is so powerful that this is true, let me just point to Putin’s world-changing invasion of Ukraine to highlight that poor decision making in other major nuclear powers–and thus who is running them–also matters. Imagine if the leadership of China finally pops off at Taiwan, which is still a major producer of the semiconductors on which much of the world runs. Leadership of Saudi Arabia matters, as the Kingdom is still a leading producer of oil, the “spice” on which our current world’s Dune runs and which must flow for trade, modern medicine, and modern agriculture. No Nature op-ed on the murder of Jamal Khashoggi. No comment on the Uighur genocide in China. But hey–at least they condemned the invasion of Ukraine. So I guess their mental map of the world doesn’t end in the middle of North America at least some of the time.
This is almost as embarrassing as the conference I was at in France about 6 months after Trump got elected. The European head of the hosting society got up and reminded all participants before the plenary sessions presenting new science results in the field that “this is a scientific conference. Please refrain from commentary on politics, no matter how high emotions might be. Politics is beyond the scope of our scientific efforts and collegial collaboration. Please, all presenters, focus on your exciting data, and keep all discussion and questions germane to the science only.” This was received with a standing ovation.
Every single US presenter (and ONLY the US presenters) who got up afterward immediately spent the first five minutes of their presentation venting their personal feelings and opinions about the US election–as if the mostly European audience was there to give a f*** about domestic US politics, and not the, you know, science. I have never been so embarrassed to be an American as I was in that audience. Aboot then, I wondered if I could pull off a Canadian accent for the remainder of the conference, eh?
–Is the pendulum on the censorship hysteria swinging? An article worth reading here.
–When someone or some institution tells you, invariably by their actions, what their true values and motivations are, believe them. I recently learned that Joe Rogan’s profits from YouTube went up by 25% right before all of his podcasts moved permanently to Spotify, despite the many COVID controversies Joe had. The timing was the interesting part: right after Joe signed his Spotify deal, YouTube, despite all its Levi’s-like press-release posturing, dropped all the demonetization strikes and content warnings on Joe’s podcasts.
Because YouTube, and parent Alphabet, wanted to make as much money as possible off the Joe Rogan Experience before it left their platform.
So let all that “objectionable” content run wild, for as many clicks as YouTube could get on it before it was gone, baby!
–Profit. No matter what the corporation says, it cares most about its bottom line.
–Speaking of schismogenesis, Russell Brand versus Tucker Carlson. Who ya’ got?
Your chances are…
–Your chances of catching coronavirus are equivalent to the chances that every picture or photo in this update–every single one–was either generated by free AI tools on the web (DALL-E 2, NightCafe, Deep Dreams, and Images.AI) or modified by AI (BeFunky) for stylistic effect. Don’t believe me? Go back up to Iron Mike’s photo–the face tattoo is wrong. AI can’t get that detail right yet (it also struggles with hands, for the record). The photo of the “real” model is likewise not a real person; a computer algorithm drew that. The wrong face tats on Iron Mike aside, there are limits to this technology, as I keep emphasizing. I gave up on various forms of prompts for “young european man at party with Tecate ring girls and Mike Tyson in the background” because none of them got me anything usable. A simple black-and-white Mike Tyson photo prompt got what was used. Likewise, AI did not handle approximating the bird girl statue from “Midnight in the Garden of Good and Evil” well–I had to use an existing image as the prompt to get the version above. Also, surprisingly, AI cannot, at all, generate an H&E image that looks like anything you would see under a microscope, despite the fact that you can Google image literally thousands of examples of what these really look like from photos of real cases. I dunno why the AI has that specific blind spot either. “Army of the Bioterrorist Monkeys” got some pretty cool looking hits though…
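For anyone who wants to try that kind of prompting programmatically rather than through the free web tools I actually used, here is a hedged Python sketch against OpenAI’s hosted image-generation endpoint. The endpoint URL, request fields, and response shape are my reading of the public API documentation (not anything used for this update), and the prompt simply mirrors the Tyson example above.

import os
import requests

# Hedged sketch: one request to OpenAI's image generation endpoint.
# Endpoint, fields, and response shape follow my understanding of the
# public API docs; OPENAI_API_KEY is assumed to be set in the environment.
# None of this update's images were made this way (those came from free web tools).
resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "prompt": "black and white photo of Mike Tyson",  # mirrors the prompt described above
        "n": 1,
        "size": "512x512",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"][0]["url"])  # URL of the generated image

Swap in whatever prompt you like; the web tools named above are doing essentially the same thing behind friendlier front ends.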
<Paladin>