IT#8 Will GPT AI make humans obsolete?

 #ITManagement #SoftwareEngineering #AI #ML #GPT


No. GPT is not a real AI; it is just old ML on steroids with enormous computational resources thrown at it. A GPT model is not an intellect. It is an ML model that predicts what a human would write so that other humans would like it. That's it: not the truth, not the right thing, just what other humans would like, based on tons of available data. GPT AI is just a reflection of humans in a mirror. Remove the humans from the picture, and the mirror will reflect nothing.






How does it work?


About 40 years ago, around 1982, I wrote a simple computer simulation on an IBM/OS 370 that generated primitive movie scenarios. It had three characters and a limited set of actions they could perform. The setup was fixed:


Julia is near a creek with a pitcher, Pete is nearby, and a gang of outlaws hides in the bushes.


BTW, here is how one of the modern AIs (MidJourney) illustrated it:





 

Inside, it was just a probability matrix over states, i.e. a Markov chain. A Markov chain is a very simple mathematical probability model. You define states and a matrix of probabilities of moving between those states. Put simply, each row of the matrix is a vector of probabilities of moving to the next state. Complicated? Let me give you an example. Here is that matrix. I cannot promise I used exactly this one, but that's the idea:




Notice that these are not probabilities but rather weights. This is how it works today in those AI models. So, if you have just printed out “Julia:”, you go to the next states “cries”, “shoots”, and “runs away” with probabilities proportional to 0.5, 0.1, and 0.01. See the row “Julia:”.
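
In code, that sampling step looks roughly like this. This is a minimal Python sketch, not my original program: only the “Julia:” row (0.5, 0.1, 0.01) comes from the matrix above, and all the other states and weights are invented for illustration.

import random

# A sketch of the scenario generator as a Markov chain over tokens. Each row of
# WEIGHTS holds weights (not normalized probabilities) for the next token. Only
# the "Julia:" row comes from the example above; the rest is invented.
WEIGHTS = {
    "Julia:":              {"cries": 0.5, "shoots": 0.1, "runs away": 0.01},
    "Pete:":               {"shoots": 0.5, "cries": 0.05, "runs away": 0.05},
    "A gang of outlaws:":  {"jumps out of bushes": 0.4, "shoots": 0.3, "runs away": 0.3},
    # After an action, either repeat an action on the same line ("shoots, shoots")
    # or hand the scene over to the next character.
    "cries":               {"Julia:": 0.3, "Pete:": 0.3, "A gang of outlaws:": 0.2, "cries": 0.1},
    "shoots":              {"Julia:": 0.2, "Pete:": 0.2, "A gang of outlaws:": 0.3, "shoots": 0.3},
    "jumps out of bushes": {"Julia:": 0.4, "Pete:": 0.3, "A gang of outlaws:": 0.1},
    "runs away":           {},  # the scenario ends here
}

def generate(start="A gang of outlaws:"):
    token, script = start, [start]
    while WEIGHTS[token]:                 # stop at the absorbing "runs away" state
        row = WEIGHTS[token]
        # Pick the next token with probability proportional to its weight.
        token = random.choices(list(row), weights=list(row.values()))[0]
        script.append(token)
    return " ".join(script)

print(generate())

Each run walks the matrix and produces a different little scenario, simply because the choice at every step is probabilistic.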


Sometimes it produced an expected scenario:


A gang of outlaws: jumps out of bushes.

Julia: cries.

Pete: shoots.

A gang of outlaws: runs away.


But since it was probabilistic, sometimes it was more dramatic:


A gang of outlaws: jumps out of bushes.

Julia: cries.

Pete: shoots, shoots, shoots.

A gang of outlaws: shoots, shoots, shoots.

Pete: shoots, cries, shoots.

Julia: shoots, cries, cries, shoots.

A gang of outlaws: runs away.


And sometimes an unexpected one:


A gang of outlaws: jumps out of bushes.

Julia: jumps out of bushes.

A gang of outlaws: runs away.


MidJourney AI illustrated it this way:





However, the bottom line is that all modern AIs do is create that same probability matrix, just in a far more advanced way and with huge dimensions. The model also automatically picks the tokens to print (and yes, it has no clue what they mean; they are just chunks of text to it).
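
Just to make the “chunks of text” point concrete, here is a toy sketch of my own (real models use byte-pair encoding rather than fixed four-character chunks): the model only ever sees numbered pieces of text, never meaning.

# A toy "tokenizer": chop the text into arbitrary four-character chunks and number them.
# This is not how GPT actually tokenizes, just an illustration of "chunks of text".
text = "A gang of outlaws jumps out of bushes"
chunks = [text[i:i + 4] for i in range(0, len(text), 4)]
vocab = {chunk: idx for idx, chunk in enumerate(dict.fromkeys(chunks))}
print(chunks)                      # ['A ga', 'ng o', 'f ou', ...]
print([vocab[c] for c in chunks])  # the model works with these numbers, not with meaning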


How is the matrix calculated?

Yes, sure, the matrix is created using that scary “neural network” thingy. But is it really that scary?


Ok, 50... yes, 50 years ago I played with a self-learning robot that learned to play tic-tac-toe using matchboxes. A simple game on a 3x3 field.


To do that, you buy dry peas and save a bunch of empty matchboxes.






Now you label every box with a possible tic-tac-toe position and the next move to make from it. For example, for the first move on an empty field, you will need 9 boxes:






Depending on the second player's possible moves, you get a bunch of boxes for the next step. Since it's only a 3x3 field, you won't have to break the bank; the number of boxes stays reasonable: 9*8*7 minus duplicates. Put a dozen peas into each box. Now, whenever you need to make a move, look at all the boxes matching the current position and pick the one with the most peas. If several boxes have the same number of peas, pick one at random. Don't put the used boxes away; keep them open until the game is over. This is called “training”.


If you win, put an extra pea into each box used in the game. If you lose, take one pea out of each box used in the game. Today that's called “back propagation”.


Play a lot of games. And in the end, your matchbox robot will start winning.
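
For the curious, here is roughly the same matchbox procedure as a Python sketch of my own: peas become integer weights, every matchbox becomes a dictionary entry, and the robot trains by playing against a random opponent.

import random

# A sketch of the matchbox robot: peas become integer weights, and every
# (position, next move) matchbox becomes an entry in a dictionary.
EMPTY, ME, OPPONENT = " ", "X", "O"
boxes = {}  # position -> {move: number of peas}

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

def pick_move(board):
    # "Open the matchbox" for this position; a new box starts with a dozen peas per move.
    box = boxes.setdefault("".join(board), {m: 12 for m in legal_moves(board)})
    most = max(box.values())
    # Pick the move with the most peas; break ties at random.
    return random.choice([m for m, peas in box.items() if peas == most])

def play_one_game():
    board, used, player = [EMPTY] * 9, [], ME
    while winner(board) is None and legal_moves(board):
        if player == ME:
            move = pick_move(board)
            used.append(("".join(board), move))  # keep the used boxes "open"
        else:
            move = random.choice(legal_moves(board))  # a random opponent
        board[move] = player
        player = OPPONENT if player == ME else ME
    # "Back propagation": add a pea to every used box on a win, remove one on a loss.
    result = winner(board)
    for key, move in used:
        if result == ME:
            boxes[key][move] += 1
        elif result == OPPONENT:
            boxes[key][move] = max(1, boxes[key][move] - 1)
    return result

# "Play a lot of games."
results = [play_one_game() for _ in range(20000)]
print("wins:", results.count(ME), "losses:", results.count(OPPONENT))

The one liberty taken here is that a box is never emptied completely (the max(1, ...) part), so the robot always has a move to pick.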


Each matchbox is a node in your neural network. The main difference is that the training and back propagation were executed by your brain and not by a computer.


There are a few smaller details (note the “regression” in the name), but essentially that's what is done to create those AIs. Of course, you cannot handle billions of nodes with your brain, or handle (or budget) billions of physical matchboxes, but computers can. And that's what they do. Which many people today, for some reason, call “AI”.



So, the bottom line?

So, will that pseudo-AI replace humans? In some areas, yes. Illustrators for books and articles, yes: this post is illustrated using the MidJourney and Dall-E AIs.


Text? For God's sake, humans produce so much junk text that it can only add to an already huge pile of low-quality writing. Check litnet.com: the problem there is not finding a text, but finding something worth reading.


Code? That's harder. You see, it's not just trained on the existing GitHub code; it's also trained on all the bugs contained there. Do you want those bugs? No? Oops...


So the bottom line is simple: no. It may be a valuable tool, but it will not really replace humans. Well, some crazy execs may try, but trying is not succeeding. And they will not succeed.


Those AIs are just a reflection of humans. Remove the humans, and the mirror will reflect nothing.





