Artificial Intelligence and Natural Stupidity

We live in an irritating world, made all the more tiresome by the increasing amount of tech we contend with every day. There are those Facebook algorithms, eerily sending you adverts geared to your “likes,” or LinkedIn knowing your fourth-grade classmates better than you do; that tinny voice offering to help you navigate tech support (god help you if you have an accent); or having to prove you’re not a robot to some web site. Never mind refrigerators and “smart” TVs that can watch you – and be hacked – or apps professing to predict your cardiac or Alzheimer’s risk that are about as reliable as a palm reader. Human contact, it would seem, is becoming irrelevant. The few times an actual human answers a call or is there to help – well, I don’t know about you, but I want to weep: things go so much more smoothly.

Yes, I know, that’s just crazy talk. The future is now and it’s automated.

Then there’s manufacturing, where factories that used to employ hundreds if not thousands of people now have maybe twenty working alongside the robots that do the work. What would one call them, I wonder? It’s a pride of lions – so perhaps a clank of robots?

Danger, Will Robinson, danger.

Algorithms, chatbots, trolls and nameless, faceless tech of all sorts are so ubiquitous we barely notice them any more – up to and including in health care, where you’d think that in dealing with a sick, vulnerable person some human contact would be the basic requirement. Where’s Robot in Lost in Space to warn us, along with Will Robinson, of the danger?

In the health care realm, even when there is ostensible human contact it’s technology driven. Hang out in a hospital room for a day or two as I did recently and watch as a nurse or PN (I know they’re not called that any more but I can’t keep up with the changes) wanders in, checks the beeping machines, jots something down and wafts out without a word. Glance down the corridor and pretty much everyone at the nurses’ station is staring vacantly at a monitor, oblivious to anyone waiting to ask a question. Honestly, it seems a pity to disturb those machines sometimes.

So why it should have come as a surprise to me I don’t know: in Ontario, apparently, there’s some conflict going on over robots in Canada’s operating rooms. A handful of surgeons are seriously miffed that their toys are going to be taken away from them, since there’s no real evidence they actually work any better than the old-fashioned kind of surgery done by actual humans.

(Reminds me of a conference I attended years ago where a urologist spoke for nearly an hour about some cool new, massively expensive “microwave” that was going to revolutionize prostate treatment and do away with prostate cancer forevermore. Well, it’s decades later and I haven’t noticed any revolution or reduction in cancer. What was noteworthy at the time, to me at least, was that a later speaker, a woman doctor explaining cervical cancer, was all about low-tech clinical matters and how to make the patient comfortable. I never noticed any patients being mentioned alongside the toys. Interestingly, a group of male prostate cancer patients made their own film around that time in which they related their experiences, in the hope that more men would exercise caution before leaping into the high-tech and/or surgical options.)

Granted, tech can be a godsend to the injured and disabled. Some of those new digital prosthetic limbs are truly miraculous. The trouble is when the outcome doesn’t match the time, money and effort it takes to create some piece of hard/software. Take robots in the operating theatre. There has to be some serious proof that the tech is superior to current care. And I don’t think I’m being a Luddite when I say that if I were the person being cut open, I’d just as soon have a thinking, adaptable being leaning over me – one whose intelligence was not artificial, thank you very much – versus some robot programmed by a 17-year-old who actually believes everyone’s insides look just like the pictures in Gray’s Anatomy (the book, not the TV show). OK, I’m being facetious; presumably live human doctors have input into the development of whatever boffo artificial intelligence (AI) went into the thing. But, as with the cockpit, one would prefer to grump about legroom secure in the knowledge that there are biological beings flying the plane; humans who could react to the unknown based on expertise and experience, not rote. (Highly motivated humans who also understand that they go down with the plane, so it behooves them to figure out how to set the damn thing down safely – maybe on the Hudson River.)

As anyone (read: all of us) who has banged on a mouse in frustration while a drop-down menu or some program insists we haven’t done such-and-such – even though we have, seventeen times, and there’s simply no recourse – can attest, I can see the day coming when a surgical arm needs rebooting and, gosh, nobody’s there.

No doubt AI aficionados will object and tell me (in no uncertain terms) that my understanding of artificial intelligence is flawed and as biased as Will Smith’s character in I, Robot. Maybe so. But as someone who read the original Asimov books way back when, I tend to think the potential for harm increases exponentially with the complexity of the task at hand. Deep Blue may have won at chess, and some digital gamer may even have won at a game called “Go,” but surgery and flying a plane aren’t games.

Artificial intelligence, oh my

Alas, the toys ‘r us crowd is happily moving along, research funding at the ready and PR teams salivating at the thought of the coolness of it all. In fact, any day now it won’t just be a robotic arm in the OR but dead silence as STAR (Smart Tissue Autonomous Robot – who thinks of these acronyms anyway?!) takes over.

At the moment STAR’s a bit slow: a simple gastric bypass operation that takes a surgeon approximately eight minutes takes it roughly an hour. But just as chatbots reply to the question you tweeted at some airline in that anodyne, generalized robotic tone, soon so will your surgery. Trouble is, things can go wrong in surgery, and I’m not so sure STAR is up to the challenge. Then again, if something goes wrong you can always sue the manufacturer.

Honestly, I think I’d rather go with lions and tigers and bears. At least they’re real. And their babies are cute.

Here’s where it gets seriously creepy

I suspect most people have an image of AI and tech based on the movies, where our hero/heroine leaps in to land the plane or do that tracheotomy when evil terrorists have taken over. But those are, by definition, fiction: scripted, directed, edited. No computer ever crashes on any of those shows – people are being chased by clever villains (who love classical music), but every time they manage to download the vital clue in ten seconds flat and get away. No memory stick ever screws up; nobody ever has an issue finding the right port; and naturally no software ever needs rebooting: everything from the GPS on up or down works perfectly. Then there’s the rest of us, who are simply trying to order a book online and have spent the last half hour retyping our home address, only to see that irritating red pop-up telling us our order can’t go through because our information is incomplete.

Inevitably, at times things get seriously creepy. Pornographic androids, and now this female android called Sophia who’s been given Saudi citizenship – I kid you not. I bet you anything this female, now Saudi, android has rights that real women in that Wahhabi country don’t have.

Then again, what can one expect from the desert kingdom, where the crown prince throws anybody and everybody in jail at whim and plans some robot-run economic free zone for the future? Maybe Sophia can work there.

Fake news?

Moving on – and keeping to the theme of technology taking over – androids aside, the so-called health news that social media zips our way, for our eyes only, is all too often dead wrong. And potentially dangerous. Not only is it almost inevitably disease-oriented (which most minor problems are not), but its advice can be downright harmful. I see catchy ads asking me to click on some supplement that will make me smarter, fitter, thinner and no doubt taller. Needless to say I don’t click, but somebody does, otherwise they’d stop posting those ads. Like spam: if nobody ever replied, they’d stop doing it.

Research in general has increased exponentially over the last few decades, to an extent that’s difficult to conceptualize. What we used to call the Index Medicus in the 20th century ran to the tens and, eventually, hundreds of thousands of papers; now the numbers are too large to grasp (and getting larger all the time). And the sad thing is that most of us don’t appreciate just how much the error rate goes up as the numbers increase.

My friend Frogheart, aka Maryse de la Giroday, whose gem of a blog contains everything nano you’d ever want to know, passed along this piece to me. It’s about a single issue: a breast cancer cell line used over the last four decades that actually isn’t made up of breast cancer cells at all, but melanoma.

The mind boggles at just how much research might have been based on this faulty methodology/thinking and just how many patients and clinicians have been led astray – not least because breast cancer is such an emotive topic.

Over the years I’ve read many, many research papers and articles; some I’ve been asked to review, others I’ve written about, still others I’ve read simply out of interest (or fury). There’s a lot of bad research out there, people. Fake news isn’t just about politics; it’s also about life, health and everything else. So please use caution when you read or hear that “scientists” have found X or Y may help with this or that, and don’t add supplements or alter your medications based on faith in science. Science, like pretty much everything else human beings engage in, is not only fallible but subject to the same human foibles as anything else: money, position, power, jealousy, idiocy … As the geneticist Lewontin once said, scientists come from the same world as the rest of us.

A world that these days is probably binary, digital – and not that bright.

12 thoughts on “Artificial Intelligence and Natural Stupidity”

  1. penelope harris

    Are we talking about the HeLa cells being used in today’s cancer research, from one woman’s ovarian cancer? Probably…

    1. susan Post author

      Hi Penny – Yes we are. Among other mistakes and sloppy science. The Slate article was excerpted from the book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions by Richard Harris. I think this link might work:
      http://www.slate.com/articles/technology/future_tense/2017/04/the_impostor_cell_line_that_set_back_breast_cancer_research.html
      Sorry about that.
      In reality most effective treatments don’t come from genetics or high tech solutions but are based on the sensible use of old drugs.

  2. Clive Edwards

    Cranky is as cranky does. My eighty-something neighbour’s favourite expression is, “…you know what makes me cranky?”

    Personally, I am beyond “cranky”. “Bemused”, while somewhat under-descriptive, is more along the right line. Medicine, health, politics, economics, the security state. So much B.S., so little time.

    1. susan Post author

      Well, if one goes beyond cranky the top of one’s head would blow off and that wouldn’t be A Good Thing …

  3. Maryse de la Giroday

    Hi Susan Baxter! I’m really glad to see this piece, which brings together a lot of AI as described in various news sources while focusing on its (and robot) integration into health care. So often one reads a series of disconnected bits of information about AI and about robots that it becomes difficult to form any kind of coherent picture. Having tried to do this a few times myself I appreciate the skill and hard work, let alone the insight, it took to produce this piece. I hope to see more! Cheers, Maryse Btw, thank you for the shout out.

  4. Clive

    And let’s not forget “off-label use” of pharmaceuticals. I remember my wild and spent (misspent or otherwise) youth, when no self-respecting hippie could resist playing Russian roulette with their mind. In those days, when the CIA was just getting into the drug business, they co-opted LSD for “brainwashing” purposes. Recent research has revealed why this failed, and why hippies are generally a tad jaded about authority in general. Seems psychedelics contribute to brain plasticity, increasing the number of synapses and other brain connections that actually change the way the brain works – but in a good way. All the various psychedelics seem to protect against depression and anxiety, including PTSD – good for combatting brainwashing, not so good for promoting brainwashing. Perhaps that is why the CIA stopped “turning on” the youth of the world and now traffics in opiates such as heroin instead. After all, fast brains, like fast cars, are a danger in the wrong hands. Of course, mileage differs with the driver.

    1. susan Post author

      I don’t know much about psychotropic drugs, though I’ve read bits and pieces. Personally I tend to be … cautious … with anything that can affect neurological connections. As for off-label use of meds, I don’t tend to be all that against it as a rule, provided the drugs actually work (not like, say, antipsychotics used in nursing homes supposedly to help with memory issues, which they do nothing for). Drugs are simply substances that have an effect on the body. The ones we like we call therapeutic effects, the ones we dislike we call side effects. But in the end they’re just effects. 🙂
