1. Threats

Clippy the Maximizer

A talk about the lesser-known dangers of ChatGPT and other generative models.




Pleading. Read this. ChatGPT has no concept of what lying is. It has no concept of fact because, amongst other things, the data has not been properly validated. What is happening is highly sophisticated probability testing.

It is burbling. Follow the links. OpenAI themselves describe it as a work in progress. Two or three iterations down it may be more reliable, with a bit of luck. See my comments below for the original post and other supporting detail. Tread carefully.

Source: post by Alan Woods


Can we trust anything from ChatGPT?

This carefully researched article demonstrates just how dangerous generative Large Language Models are.

They dream up each next token from the tokens before it, producing highly plausible streams of words and fabricating plausible-looking sources with complete disregard for any reality.

This is because the LLM keeps no record of which documents provided any given piece of information. Indeed, it was never created or intended to be able to do this.

Its design has no capability to link the token patterns it learns back to any one of the vast number of training documents.
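The point above can be made concrete with a deliberately tiny sketch. This is a toy bigram sampler, nothing like ChatGPT's actual architecture, but it illustrates the same structural fact: the model stores only which token tends to follow which, and no field anywhere records which source document a transition came from.

```python
import random
from collections import defaultdict

# A toy "training corpus" of tokens (real models ingest billions).
corpus = ("the model predicts the next token and "
          "the next token follows the last token").split()

# Learn which token follows which. Note that provenance -- *which*
# document a pair came from -- is simply not part of the data structure.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n=8, seed=0):
    """Sample a plausible-looking stream of tokens, one at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The output reads as locally fluent word-order, yet asking this model "where did that come from?" is unanswerable by construction: the question has no representation in the model at all.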

Those who have created this technology do not understand linguistics, the science of language. They believe that intelligence can be obtained from this dreaming technology simply by using ever larger models, but, as we are seeing, this is not true and can never be the case.

This is the greatest danger and risk from all the hype surrounding current neural net based AI.

Source: post by Richard Self


Amongst other concerns, it seems ChatGPT has democratised the writing of malicious code. It's not really a specialist skill anymore, as the model will generate such code on request. Tears before bedtime. — Alan Woods