The big lie of AI

There is a lot of bullshit propagated about ‘AI’ by the tech industry and the media: what it is capable of, what the dangers are. Hopefully these links are just enough to dispel much of it for anyone who hasn’t yet taken a good look at the big lie of AI.

This third video explains something about how a language model works.

This last one brings up a playlist when you click on the title.

I think Jaron Lanier is one of the smartest guys in the room when it comes to all of this stuff.

His point about the “geek mentality” is well stated, but it falls short on one important point: all these techies were the chess and math club geeks who got rolled for their lunch money and didn’t get dates to the prom. Now they have all kinds of money and power, and they’re looking for some payback.

His other excellent point was about the mystery and spirituality involved with being a human. I am hopeful that it’s that point that will eventually win out. It’s not in our nature to isolate ourselves in some artificial environment. It may work for some people, and more power to them if that’s how they choose to live, but I don’t think it works for the vast majority of us.

Lanier is definitely a smart guy on this stuff, but his opinions on the state of surveillance capitalism, and on how to go about correcting it, are a pretty mixed bag.

The AI singularity is a good excuse for continued automating away of jobs. Nothing we can do about it, AI is just getting more advanced and we need to surf that wave!

What’s really happening is billions being poured into various ways of automating away jobs, efforts that have little in common other than some or most of them using ML for different bits of the work.

Tech has always cost jobs. When I was selling accounting systems back in the ’80s, one of the cost justifications was that the company would not need as many people in its accounting department. Look at how many jobs have been lost in manufacturing because of robotics.

It’s a harsh reality for sure, but I don’t know that there are a lot of people who start businesses with the main idea of being able to hire a lot of employees.

As you push the minimum wage up, you get to the point where it becomes more cost effective to automate those jobs. I read the other day that Best Buy is opening a store staffed almost entirely by virtual employees as a test. It sucks, but it’s where we are. Especially when you consider that, at least at present, employers are having trouble finding workers.
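Just to make that comparison concrete, here is a toy break-even sketch in Python with entirely made-up numbers (the kiosk price, lifetime, upkeep, and overhead multiplier are all assumptions, not figures from any retailer): once the fully loaded annual cost of a worker crosses the amortized annual cost of the machine, automating starts to pencil out.

```python
# Toy break-even comparison: human staffing cost vs. amortized automation cost.
# All numbers are invented for illustration only.

def annual_labor_cost(hourly_wage, hours_per_week=40, weeks=52, overhead=1.25):
    """Rough fully loaded cost of one full-time employee (overhead is a guess)."""
    return hourly_wage * hours_per_week * weeks * overhead

def annual_automation_cost(upfront=150_000, lifetime_years=7, upkeep=10_000):
    """Hypothetical kiosk/robot: purchase price spread over its life, plus upkeep."""
    return upfront / lifetime_years + upkeep

for wage in (10, 15, 20, 25):
    labor = annual_labor_cost(wage)
    machine = annual_automation_cost()
    print(f"${wage}/hr: labor ~${labor:,.0f}/yr vs machine ~${machine:,.0f}/yr "
          f"-> {'automate' if machine < labor else 'hire'}")
```

With these made-up inputs the tipping point falls somewhere between $10 and $15 an hour; the real numbers differ by business, but the shape of the calculation is the same.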

There’s a large tire and auto maintenance company here called Lex Brodie’s; they’ve been around since the ’60s. They recently had to close one of their locations because they couldn’t find enough employees to staff it. A big part of that is the loss of population we experienced here from Covid, but another big part is that people don’t want to do that kind of work anymore.

Lower paying jobs just aren’t allowing people to earn a living anymore. And going to college to earn more money comes at the cost of high debt today (one of the biggest forms of household debt in the U.S.). It was bad here in the U.S. before Covid and inflation, and it is much worse now. Instead of the economics being meaningfully addressed (previously made worse by outsourcing jobs), automating more jobs away is accelerating the situation. When unemployment is too high and the cost of goods is too high, who are the customers? And of course, small businesses that don’t automate away jobs have a harder time keeping the doors open than big business does. Seems like a recipe for economic collapse to me, with a short-term win for big business only.

Yesterday I read part of an interview with Noam Chomsky, who said that while there are sciences of biology and chemistry, there is no cognitive science (a field he thought he was developing fifty years ago). In particular, he said that when someone decides to move a finger, pick up an object, or say a word, there is NO science explaining or modeling what that someone is DOING cognitively. We just don’t know. Mechanistically, something happens, but what about before the mechanism? He borrowed an old metaphor (who said it originally?): we may know what a puppet is doing on the puppet stage, and we might study how the strings get pulled mechanically, but all that tells us nothing about the story, the decisions, the intentions of the puppetry.

Today’s AI is about strings.
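To make the “strings” point literal, here is a toy bigram text generator in Python. It is not how any real system is built (the corpus is invented and real models are vastly larger), but it shows the shape of the mechanism: it only tracks which word tends to follow which, so it can pull the strings and produce plausible word sequences while containing nothing that corresponds to a decision or an intention.

```python
# A toy "language model": count which word follows which, then sample.
# Pure mechanism; there is no representation of meaning or intent anywhere.

import random
from collections import defaultdict

corpus = "the puppet moves because the strings move and the strings are pulled".split()

# Count the successors of each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])   # pick a likely next word, nothing more
        out.append(word)
    return " ".join(out)

print(generate("the"))   # e.g. "the strings move and the puppet moves because ..."
```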

To me that suggests that the economists are thinking only mechanically. If they were social scientists or cognitive scientists, they would be asking why automation is the current fad, and whether there really are any economic advantages to it.

Perhaps “earning a living” is the core of economics.

Do they have an apprenticeship program? There’s a problem these days with some companies complaining that there aren’t enough Xs or Ys out there when they don’t train any themselves or they treat their employees like crap.

True. Companies are used to the version of corporate welfare that has turned universities into trade schools.

A related article, from today’s Ha’aretz, has architects and historians poking holes in the idea that Saudi Arabia is going to build an artificially intelligent city in a big wall across the northern Saudi desert. Two main points made were that (a) such ideas are not new but come from older 20th-century fantasies, and (b) it’s all founded on the notion that technology can do anything, including solving social problems. They end with the obvious suggestion that bin Salman is not serious, just playing politics. Grandiose claims and impossible promises sell well.

Those kinds of exaggerated claims can be found pushing the very popular idea that AI can read our minds, or draw pictures of what we are thinking, or sense what people want to say. These claims can usually be reduced to tiny lab results that are overgeneralized to all pictures or all language. In other words, if an experiment using brain sensors (or implants) can “recognize” one, two, or a handful of words, then they run with the implication that the problem can be solved for all words. Serious problems remain, duh. One is that such experiments involve long training time, in which both the software AND the human subject are mutually trained to link up stimulus and response. Another is that they fail to measure, or talk about, how many mistakes are made by the human or the software. What all this amounts to is a routine that has been used with disabled people for decades: a “communication board” containing buttons matched with “words”, sometimes with spelling, sometimes with pictures. When the subject looks at (retinal tracking) or points at one of the buttons, the “result” is that they have chosen one of the pre-selected words.
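Here is a minimal sketch of what that kind of “decoding” typically reduces to, with everything invented for illustration (the four-word vocabulary, the 8-number “brain signal” features, the training trials): nearest-centroid classification over a small, pre-selected set of words. Nothing outside the board’s vocabulary can ever come out, no matter what the person is thinking, and the error rate is only whatever you bother to measure.

```python
# A toy "word decoder": pick the nearest pre-selected word from a fixed board.
# All signals and vocabulary are made up; this is not any lab's actual pipeline.

import numpy as np

VOCAB = ["yes", "no", "water", "help"]   # the pre-selected "communication board"

def fit_centroids(signals, labels):
    """Average the training signals recorded for each pre-selected word."""
    return {w: np.mean([s for s, l in zip(signals, labels) if l == w], axis=0)
            for w in VOCAB}

def decode(signal, centroids):
    """Return whichever board word's centroid is closest; nothing outside
    VOCAB can ever be produced, whatever the subject is actually thinking."""
    return min(VOCAB, key=lambda w: np.linalg.norm(signal - centroids[w]))

# Toy data: 8-dimensional "features", many noisy training trials per word,
# standing in for the long mutual training session described above.
rng = np.random.default_rng(0)
true_patterns = {w: rng.normal(size=8) for w in VOCAB}
train_signals, train_labels = [], []
for w in VOCAB:
    for _ in range(20):
        train_signals.append(true_patterns[w] + rng.normal(scale=0.5, size=8))
        train_labels.append(w)

centroids = fit_centroids(train_signals, train_labels)
test = true_patterns["water"] + rng.normal(scale=0.5, size=8)
print(decode(test, centroids))   # "water", if the noise cooperates
```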

That is as far from “language” as pitch-correction is from singing.

“The method is so simple, brain stimulation devices can be purchased over the internet or you can make one yourself from nine-volt batteries.”

I can’t say for sure, but I would bet the house they do. They’re a pretty well respected company here, a car repair place that has a good reputation.

I mentioned the population loss here. A lot of that was tourism industry people who left when things started opening back up on the mainland. Those jobs pay pretty well, by Honolulu standards, and I’m sure a lot of people here are opting for those jobs over car mechanic.

There is some validity to claims about not enough people being available for jobs. But the thing is, a lot of those jobs are the kinds of jobs people don’t want to do anymore; people don’t like getting their hands dirty.

There is always a big mismatch between what workers think they want and what bosses think they want. Sometimes that can feel like a shortage of something.