“I’m extremely left-leaning, but I do have concerns about the (((globalists))) in finance”
As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.
I am too tired to put up with people complaining about “angies” and “woke lingo” while trying to excuse their eugenicist drivel with claims of being “extremely left-leaning”. Please enjoy your trip to the scenic TechTakes egress.
“If you don’t know the subject, you can’t tell if the summary is good” is a basic lesson that so many people refuse to learn.
From the replies:
In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere, messes up, the company and the authorities should theoretically be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus, how often it’s confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.
Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that it was not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs, are they going to include listening to ChatGPT? If so, then you need to make sure that OpenAI holds its program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.
There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.
And a good sneer:
With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.
Not A Sneer But: “Princ-wiki-a Mathematica: Wikipedia Editing and Mathematics” and a related blog post. Maybe of interest to those amongst us whomst like to complain.
the team have a bit of an elon moment
“Oh shit, which one of them endorsed the German neo-Nazis?”
Aaron likes a porn post
“Whew.”
Please don’t make posts to TechTakes that are just bare images without a description. The description can be simple, like “Screenshot from YouTube saying ‘Ad blockers violate YouTube’s Terms of Service’”. Some of our participants rely upon screenreaders. Or are crotchety old people who remember an Internet that wasn’t all social media feeds sharing snapshots of other social media feeds.
“Drinking alone tonight?” the bartender asks.
I don’t see what useful information the motte and bailey lingo actually conveys that equivocation and deception and bait-and-switch didn’t. And I distrust any turn of phrase popularized in the LessWrong-o-sphere. If they like it, what bad mental habits does it appeal to?
The original coiner appears to be in with the brain-freezing crowd. He’s written about the game theory of “braving the woke mob” for a Tory rag.
In the department of not smelling at all like desperation:
On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free.
It had a very focused area of expertise, but for sincerity, you couldn’t beat 1-900-MIX-A-LOT.
Petition to replace “motte and bailey” per the Batman clause with “lying like a dipshit”.
Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”
Christ, what an asshole.
For a client I recently reviewed a redlined contract where the counterparty used an “AI-powered contract platform.” It had inserted into the contract a provision entirely contrary to their own interests.
So I left it in there.
Please, go ahead, use AI lawyers. It’s better for my clients.
Adam Christopher comments on a story in Publishers Weekly.
Says the CEO of HarperCollins on AI:
“One idea is a ‘talking book,’ where a book sits atop a large language model, allowing readers to converse with an AI facsimile of its author.”
Please, just make it stop, somebody.
Robert Evans adds,
there’s a pretty good short story idea in some publisher offering an AI facsimile of Harlan Ellison that then tortures its readers to death
Kevin Kruse observes,
I guess this means that HarperCollins is getting out of the business of publishing actual books by actual people, because no one worth a damn is ever going to sign a contract to publish with an outfit with this much fucking contempt for its authors.
There’s a whole lot of assuming-the-conclusion in advocacy for many-worlds interpretations — sometimes from philosophers, and all the time from Yuddites online. If you make a whole bunch of tacit assumptions, starting with those about how mathematics relates to physical reality, you end up in MWI country. And if you make sure your assumptions stay tacit, you can act like an MWI is the only answer, and everyone else is being un-mutual irrational.
(I use the plural interpretations here because there’s not just one flavor of MWIce cream. The people who take it seriously have been arguing amongst one another about how to make it work for half a century now. What does it mean for one event to be more probable than another if all events always happen? When is one “world” distinct from another? The arguments iterate like the construction of a fractal curve.)
I saw this floating around fedi (sorry, don’t have the link at hand right now) and found it an interesting read, partly because it helped codify why editing Wikipedia is not the hobby for me. Even when I’m covering basic, established material, I’m always tempted to introduce new terminology that I think is an improvement, or to highlight an aspect of the history that I feel is underappreciated, or just to make a joke. My passion project — apart from the increasingly deranged fanfiction, of course — would be something more like filling in the gaps in open-access textbook coverage.