Gone Rambling

Go a little off topic

28 Apr 2023: Brief Update (No, Really)

Coronavirus Archive

We will do a summary of sorts of all things coronavirus sometime next month as well. Until then:

Blastomycosis

The outbreak in Michigan is up to 109 confirmed cases, with 13 hospitalizations and, fortunately, still just the one death. The paper mill has been closed (obviously) for a deep clean of pretty much everything and an upgrade of the ventilation system. Thus far, the specific source of the fungal contamination has not been conclusively identified. The spores got in, likely through contaminated raw material, and lingered, causing these cases over the last several weeks. This is the largest single outbreak of this fungus in the US that I am aware of. Again, there is no threat to anyone who wasn’t in that specific plant in Michigan.

Marburg

Finally have some better numbers. As of last week in Tanzania, there have been 9 confirmed cases with 6 deaths. All cases are confined to the same rural district of Tanzania. Of 212 known contacts of the cases, 208 have finished the observation period. The outbreak in Tanzania will soon be on the clock to be officially declared over, barring new cases.

Equatorial Guinea is a little more active. At least 5 districts are involved, with 17 confirmed cases (12 deaths) and up to 23 probable cases still under investigation as of last week. In the last two weeks, there were two new cases (included in that 17 total), both known contacts of a previous case. At least 116 known close contacts are still being followed.

Unless you are traveling to those specific regions in those specific countries, your risk of contracting Marburg remains extremely low. There remains little chance of pandemic spread of Marburg.

Coronavirus

Most of the news here is not news to the regular readership. There has been a steady drip of news wherein none other than Dr. Fauci gave an interview conceding that masks, as a public health policy, were underwhelming in their effectiveness and may have contributed to about 10% risk reduction for an individual using a mask.

Since the last update, the FDA updated its guidance on COVID vaccination. The long story short should also sound pretty familiar to readers already. If you are under 65, in good health, and have not been vaccinated already, the recommendation is a single dose of the bivalent (previously the “booster”) vaccine, designed against the omicron variant and the initial, grand-daddy-of-’em-all wild type SARS-CoV-2. Very young children (under 5 years old) can receive a multi-dose regimen of the bivalent, since their dose is much smaller.

Yes, I too remain skeptical about how many people who have not been vaccinated before will opt to be vaccinated now.

Regardless, among the vaccinated: if you have received the full original vaccine series AND a bivalent booster, you are NO LONGER eligible for further boosters UNLESS you have specific high-risk conditions. Those 65 and OLDER can get additional boosters, so long as the last booster was more than 4 months ago. You may want to discuss this with your physician before getting a booster shot; you might also consider waiting until closer to fall, so that you can both see how “COVID season” is shaping up and have more recent protection on board during peak COVID months. Anyone age 5 or older with immune compromise can get boosters as often as their healthcare provider’s discretion permits, spaced at least 2 months apart.

Lastly on the COVID “beat”, we have an update on Damar Hamlin, the Bills football player who collapsed on the field and required quick work with an AED after getting hit in the chest by a Bengals player. This generated much conspiranoia and murmurings that it must be related to long-term vaccine side effects, despite no public account I have ever seen of when, or whether, Damar Hamlin was vaccinated or last had confirmed COVID. This was probably the peak of the “sudden increase in athlete cardiac deaths” furor of just a few months ago, which we covered at the time.

Yes–that furor has mysteriously disappeared from headlines since then, hasn’t it?

Regardless, Mr. Hamlin has been cleared to return to practice full time. The news coverage of this happy announcement quoted Mr. Hamlin stating that his ultimate diagnosis was commotio cordis, which, based on our review of the video at the time, we said was most likely (though it is a diagnosis of exclusion, made after ruling out the other, mostly hereditary and rare, potential causes). Mr. Hamlin is taking good advantage of the publicity of his incident to push for more AEDs to be readily available at high school and younger sporting events, which is a very reasonable policy.

Nefarious cardiac damage or myocarditis from vaccination appears to have been ruled out, though. It may be particularly awkward for the conspiracy theories alleging that Damar Hamlin died on the field and has been replaced by a body double (we wish we were making that up, but that’s how wild some of the “theories” got) when Damar plays his first NFL game. The chance of finding someone who looks identical enough to Damar is already quite low–finding one who can ALSO play NFL-level cornerback is this side of impossible.

AI

We have gotten some nice responses back on the AI sections, so just some quicker updates. If you really want to nerd out on the inner workings and training of ChatGPT (and other large, famous natural language AIs), look up “transformer” as an AI architecture and training strategy. I don’t have a good way to simplify it, although it does have a Wikipedia page. The “T” in ChatGPT stands for “transformer” (and the “GP”, for “generative pre-trained”, is the flavor of transformer training used for that specific AI). The shortest way I can put it is that it’s a combination of un- or weakly supervised pre-training followed by supervised “fine-tuning” into the final product.
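
For the curious, the mechanical heart of a transformer is the “attention” step, and it can be sketched in a few lines. This is a toy illustration only–made-up two-number “embeddings” standing in for real token vectors, and none of the learned weight matrices a real model has–but the arithmetic (match queries against keys, then take a weighted average of values) is the genuine article:

```python
import math

def softmax(xs):
    # Turn raw scores into weights that are positive and sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query token mixes the values,
    weighted by how strongly it matches each key."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three toy "tokens", each a 2-number vector standing in for an embedding.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(tokens, tokens, tokens)  # self-attention: Q = K = V
```

Each output row is a blend of all the input tokens, which is how every word in a sentence gets to “look at” every other word at once.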

On the limitations of AI, I had the following exchange recently, live and in person:

(Fellow physician to me): “I’m going to use ChatGPT to come up with next year’s call schedule! I can tell it who is on vacation or out of office when, how many weeks to divide up for each person, and ask it for the schedule.”

(Me): “You’ll get a rough draft, and you need to check it for errors. It will probably make some. But you’ll save some time.”

(Fellow Physician): “No, it can do it. I’ve heard it can.”

…a short time later…

(I find printed calendar sheets suggesting Fellow Physician switched to manual)

(Me): “Went back to the old school route?”

(Fellow Physician): “I tried ChatGPT three times. It gave me three different drafts, but kept putting people on for weeks when I had told it they would be off. But, it made the whole thing much quicker–I could just switch between people on the weeks it screwed up.”
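
The lesson in that story is that the checking step is the part you cannot skip–and, happily, it is also the easy part to automate. As a sketch (names and weeks entirely made up), here is the kind of trivial validator that would have flagged every one of ChatGPT’s scheduling errors instantly:

```python
# Hypothetical data: week number -> assigned physician, plus each
# physician's declared weeks off. All names and dates are invented.
draft_schedule = {1: "Dr. A", 2: "Dr. B", 3: "Dr. A", 4: "Dr. C"}
weeks_off = {"Dr. A": {3}, "Dr. B": set(), "Dr. C": set()}

def find_conflicts(schedule, weeks_off):
    """Return (week, physician) pairs where the draft puts someone on
    call during a week they said they were unavailable."""
    return [(week, doc) for week, doc in sorted(schedule.items())
            if week in weeks_off.get(doc, set())]

conflicts = find_conflicts(draft_schedule, weeks_off)
# Here the draft schedules Dr. A for week 3 despite the week off.
```

A chatbot drafts; a dumb, deterministic check like this catches what the chatbot got wrong. That division of labor is the whole point.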

Again, AI is a useful tool that will increase our efficiency. A total replacement it is not. Another example: ChatGPT can write computer programs by itself–but it makes mistakes. There was a study pairing professional coders with an AI assistant (it may even have been ChatGPT). Some had the AI assist; some did not. Those who did were more productive, spending less time per project because they got a rough draft of the program. But the moral of the story is that they still had to spend time (hours, in fact) checking and revising the code–because the AI tools are not perfect.

“Yet,” I heard argued from a reader. The reader posited that as the models got greater exposure to more and more data, they would inevitably improve on all these functions AND we might see fully emergent, novel behavior from them.

While some emergent behavior is possible, and more data will improve performance on -some- functions, that is not true for all of them, and I believe there will also be a cap on emergent behavior, let alone a true general intelligence (bona fide sentience, or consciousness). The cap on emergent behavior is admittedly a guess. That more data doesn’t solve every problem is more a matter of tacit knowledge.

I know, without being able to go into detail, that there are certain applications where no amount of additional data will improve the performance of the model or its predictions. The best example I can give: let’s say we show a bunch of pictures of people to an AI and try to train it to predict which of the people in the pictures are named “Larry.” While it may be able to do that some of the time, it will NEVER be perfect at it–not unless there exists a photo of everyone with everyone’s name underneath it, at which point the model isn’t guessing; it already knows everyone who is named “Larry.” But if you already know, why would you ever build a model to predict people named “Larry”? You wouldn’t. And even if you did, it would still fail to guess which newborn children were being named Larry–unless there really is some mysterious pattern or look to a kid that causes you to name the baby “Larry.”

Even among adults, there’s not likely to be a defining set of features for people named Larry. Not too many are likely to look like women, true, so the AI can improve the odds of its predictions somewhat. But among the remainder, a lot of “Larrys” probably look pretty similar to “Daves” and “Mikes.” There is overlap in the way people look, and the name assigned to them isn’t entirely random (it -looks- like you might be able to guess a “Larry” just by looking, at least some of the time)–but a picture is only so informative about a person’s name. So there is a hard limit on how accurately a name can be predicted from a picture, and that limit will hold no matter how many additional known cases of Larry and Dave and Mike you feed the model to train on (unless you feed it ALL of them, and they ALL become already known). Then, as we mentioned, if you try to use the model to predict which new humans (babies) are being named Larry, you can absolutely expect its accuracy to suffer–no matter how many pictures of existing Larrys you showed it.

The point is, when the thing you want your AI to identify has a strong Venn-diagram overlap with something else–especially if that something else is similar and roughly as common–there is a limit to what additional data will tell you. You will always have prediction problems in the region where the two circles of our imaginary Venn diagram overlap. Other features–other circles–can shrink the area of overlap. But if, in the real world, in the ground truth, there is ALWAYS some degree of overlap across all possible variables, you have a hard cap on performance that no amount of new training data will fix.
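
You can make that cap concrete with a few lines of arithmetic. In this toy (and entirely made-up) version, every person is reduced to one observable “look,” and both looks are shared between Larrys and Daves. The best ANY classifier can do–with infinite training data–is guess the more common name for each look, and the overlap keeps that below 100%:

```python
from collections import Counter

# Made-up data: 100 people, each reduced to one observable "look".
# Both looks are shared between Larrys and Daves - that's the overlap.
people = ([("beard", "Larry")] * 30 + [("beard", "Dave")] * 20 +
          [("glasses", "Larry")] * 10 + [("glasses", "Dave")] * 40)

def best_possible_accuracy(data):
    """Ceiling for ANY classifier: for each look, always guess the
    more common name. Overlapping looks make 100% unreachable."""
    looks = {look for look, _ in data}
    wins = 0
    for look in looks:
        counts = Counter(name for l, name in data if l == look)
        wins += max(counts.values())  # best guess for this look
    return wins / len(data)

cap = best_possible_accuracy(people)  # (30 + 40) / 100 = 0.7
```

No extra pictures of Larrys change that 70% ceiling, because the ceiling lives in the data’s overlap, not in the model.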

To say nothing of the risk of overfitting the model when you have too much data… but we’ll leave that lie for now.

Lastly, there was some online concern recently about a new AI paper that basically described training an AI to “call” on other, subject-specific AIs to help it solve problems. For example, whereas ChatGPT sucks at chess, what if ChatGPT could, behind the scenes, phone its AI buddy Stockfish–an AI that does nothing but chess, and at super-grandmaster level? ChatGPT could take the chess moves you type into it, recognize them as chess moves, and “know” to call Stockfish. With an interface set up between the two, ChatGPT would pass your moves to Stockfish, take Stockfish’s replies, and hand them back to you–letting ChatGPT “play” chess with you merely by tapping a dedicated chess AI. This isn’t all that different from what many apps on your phone already do–they call data stored elsewhere, or sometimes even ring other “mini-apps” to do something when you ask. But it’s not the “AI branching into other AIs to become even MORE AI” that some of the online chatter appeared to be afraid of. It honestly seemed to me more like a statement of the probable, if not the obvious: one could increase the usefulness of a particular “interface” AI like ChatGPT (which is a very natural human interface) if it could “talk” to more task-specific AI algorithms whenever it recognized a problem as suited to one of their specialties. That would save you, the human user of ChatGPT, the trip of finding the task-specific AI yourself and figuring out how to pose your question/problem there…
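
The “call a specialist” idea is really just routing, and a crude version fits in a few lines. Everything here is a stand-in: the stub functions fake what a real system would get from Stockfish or a calculator service, and the pattern-matching is deliberately dumb–the point is only the shape of the dispatch:

```python
import re

# Hypothetical specialist "AIs". A real system would call Stockfish or a
# math service; these stubs just stand in for the idea.
def chess_engine(move: str) -> str:
    return "e7e5"  # canned reply standing in for an engine's answer

def calculator(expr: str) -> str:
    # Toy arithmetic only; builtins disabled so eval can't do anything else.
    return str(eval(expr, {"__builtins__": {}}))

def route(user_input: str) -> str:
    """A crude router: recognize what kind of problem the input is,
    then hand it to the matching specialist tool."""
    if re.fullmatch(r"[a-h][1-8][a-h][1-8]", user_input):  # looks like a chess move
        return chess_engine(user_input)
    if re.fullmatch(r"[\d\s+\-*/().]+", user_input):       # looks like arithmetic
        return calculator(user_input)
    return "I'll answer that myself."                      # fall back to the chat model

reply = route("e2e4")  # recognized as chess, so the "engine" answers
```

Swap the regexes for a language model’s own judgment about what it’s looking at, and you have roughly the scheme the paper described–a switchboard, not a superintelligence.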

Alright… wrapping up (I did say “short” this time!)…

Your chances of catching coronavirus are the chances that this is the joke ChatGPT came up with when I took the lazy route and asked it to finish the “chances are” section for me:

“Right now, the chances of catching coronavirus are about as high as the chances of finding a roll of toilet paper at the grocery store in March 2020.”

Which, for an AI chat bot, isn’t a bad effort! But no offense to my ChatGPT buddy, I think the comedians’ jobs are safe for a little while longer…

<Paladin>