Clippy the Maximizer
A talk about the lesser-known dangers of ChatGPT and other generative models.
Threats
-
Chaff: flooding society with generated noise
- Bullshit asymmetry principle: refuting bullshit takes an order of magnitude more effort than producing it. An AI chaff principle would be 2-3 orders of magnitude bigger (see the rough arithmetic after this list).
- Slows everything down: a penalty on everyone doing anything.
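To make the orders-of-magnitude claim concrete, here is a rough back-of-the-envelope sketch. Both ratios are illustrative assumptions, not measurements: a 10x refutation cost from Brandolini's law and a 1000x drop in generation cost from AI.

```python
# Back-of-the-envelope: how the cost asymmetry compounds.
# Both ratios are illustrative assumptions, not measured values.
human_refute_ratio = 10   # Brandolini's law: refuting takes ~10x the effort of producing
ai_cost_reduction = 1000  # assume AI makes generation ~3 orders of magnitude cheaper

effective_asymmetry = human_refute_ratio * ai_cost_reduction
print(f"effort to refute vs. effort to generate: {effective_asymmetry:,}x")
# -> 10,000x: every unit of effort spent generating chaff can demand
#    four orders of magnitude more effort to clean up.
```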
-
Identity impersonation
- Especially friends and relatives
- This is in the wild already
- Dangers of creating synthetic profiles built from, or informed and inspired by, real data
- Chaffing someone’s identity: creating similar mock profiles so their real profile becomes hard to find and trust
- Click Monkeys: it can exist now, at even larger scale
-
Collecting information about you from relatives and contacts
- Combining this information via data enrichment (a minimal sketch follows this list)
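To illustrate what enrichment means in practice: joining scraps of data about one person from several sources into a single, richer profile. A minimal sketch with made-up records; every field name and value here is an illustrative assumption, and real pipelines use far fuzzier matching than an exact email key.

```python
# Toy "enrichment": merge partial records about the same person.
from collections import defaultdict

records = [  # made-up fragments from different sources
    {"email": "pat@example.com", "source": "leaked_db", "phone": "555-0100"},
    {"email": "pat@example.com", "source": "social", "employer": "Acme"},
    {"email": "pat@example.com", "source": "relative", "hometown": "Springfield"},
]

profiles: dict[str, dict] = defaultdict(dict)
for rec in records:
    key = rec["email"]  # real systems also match on fuzzier signals
    for field, value in rec.items():
        if field != "source":
            profiles[key][field] = value

print(profiles["pat@example.com"])
# {'email': 'pat@example.com', 'phone': '555-0100',
#  'employer': 'Acme', 'hometown': 'Springfield'}
```

Each fragment is nearly worthless alone; merged, they become a profile detailed enough to impersonate someone convincingly.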
-
Psyops and manipulation
- Sentiment analysis at scale (a minimal sketch follows this list)
- Goal-directed agents
- Synthesizing people, voices, content, entire identities and websites to fabricate histories and legitimacy
- Replika is a good example of this threat; imagine it happening at scale
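To show how little the sentiment-analysis ingredient now costs, here is a minimal sketch using the Hugging Face transformers pipeline. The sample posts are invented, and the model is whatever default the library ships; both are assumptions for illustration.

```python
# pip install transformers torch
from transformers import pipeline

# Commodity sentiment scoring: no ML expertise required.
classifier = pipeline("sentiment-analysis")

posts = [  # stand-ins for scraped public posts
    "I can't believe they passed that bill.",
    "Best decision the council has made in years!",
]

for post, result in zip(posts, classifier(posts)):
    # Each result looks like {'label': 'NEGATIVE', 'score': 0.98}
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```

Point the same few lines at a scraped feed and you have the measurement half of a manipulation loop; a generative model supplies the other half.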
-
Propaganda and psyops
- Create profiles by combining data from multiple real entities
- Leaving fake data trails from synthetic individuals
- Could I disguise this as advertising and get VC funding to finance it?
- Most advertising is about manipulating populations, so yes
- Psyops as a service: basically Facebook, but much more dangerous
-
Unreliable generation
- How many pieces of sound are in a cloud?
- Broken is dangerous, but occasionally working is worse
- Accuracy of self-driving cars article (6 nines of reliability)
- How accurate is ChatGPT: is it even 2 or 3 nines? (see the quick arithmetic after this list)
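For scale, here is what "nines" translate to in bad outputs per day. The query volume is an illustrative assumption; the point is the gap between consumer-grade and safety-grade reliability.

```python
# "N nines" of reliability means a success rate of 1 - 10**-N.
def failures_per(queries: int, nines: int) -> float:
    return queries * 10 ** -nines

daily_queries = 10_000_000  # illustrative assumption
for nines in (2, 3, 6):
    print(f"{nines} nines -> {failures_per(daily_queries, nines):>9,.0f} bad answers/day")
# 2 nines (99%)      ->   100,000 bad answers/day
# 3 nines (99.9%)    ->    10,000 bad answers/day
# 6 nines (99.9999%) ->        10 bad answers/day
```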
-
Changing motivations of companies
- Companies change how their services operate frequently, including via A/B testing
- Companies shut things down constantly
- Shaky foundation on which to build other things
- AI Supply chain issues: data sources, training personnel, etc.
- What if the company gets sold to an adversarial party?
- AI-generated arguments changed minds on controversial hot-button issues, according to study
- Replika users fell in love with their AI chatbot companions. Then they lost them
- Silicon Valley’s New Professional Nihilism
- I Changed My Mind on Social Media and Teen Depression
- VW wouldn’t help locate car with abducted child because GPS subscription expired
- ChatGPT and the AI Apocalypse
- ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender
- Something beautiful that generative text models will never allow again (Matthew Pagan, Medium, Feb 2023)
- Photographer Who Found IG Fame for His Portraits Has Confessed Images Were A.I.
- Show HN: ChatGPT-powered dystopia simulator
- Oceania Always at War with Eastasia: Generative AI and Knowledge Pollution
- They thought loved ones were calling for help. It was an AI scam
- Rising scams use AI to mimic voices of loved ones in financial distress
- How to Take Back Control of What You Read on the Internet
- Chinese disinformation targeting U.S. features fake people on fake news programs
- Rohan Light: I’m thinking it’s best to assume one’s LinkedIn account is subject to training exercises by third parties with the view of replicating voice or teaching an algo
- The Misalignment Museum is an art installation with the purpose of increasing knowledge about Artificial General Intelligence (AGI) and its power for destruction and good.
Pleading: read this. ChatGPT has no concept of what lying is. It has no concept of fact because, among other things, the data has not been properly validated. What is happening is highly sophisticated probability testing.
It is burbling. Follow the links. OpenAI themselves describe it as a work in progress. Two or three iterations down it may be more reliable, with a bit of luck… See my comments below for the original post and other supporting detail. Tread carefully.
Can we trust anything from ChatGPT?
This carefully researched article demonstrates just how dangerous generative Large Language Models are.
They dream up token after token in a way that produces highly plausible streams of words, fabricating plausible sources with complete disregard for any reality.
This is because the LLM keeps no record of which documents provided any given piece of information. Indeed, it was not created or intended to be able to do this.
Its design has no capability to link the token patterns it learns back to any one of the vast number of training documents.
Those who have created this technology do not understand linguistics, the science of language. They believe that intelligence can be obtained from this dreaming technology just by using ever larger models, but, as we are seeing, this is not true and can never be the case.
This is the greatest danger and risk from all the hype surrounding current neural net based AI.
Amongst other concerns, it seems ChatGPT has democratised the writing of malicious code. It's not really a specialist skill anymore, as it will generate such code on request. Tears before bedtime. — Alan Woods