Just blame it on Claude!!!

Thank you.

Hey,

welcome to another episode of Cloud

Unplugged.

We have quite a few stories.

No surprise, a lot of AI, as always,

given the current trend in the media.

But we've got: GitHub Copilot is moving away from subscription pricing to more usage-based pricing.

We have Amazon with their Trainium3 chips undercutting Nvidia and getting into the chip market.

An AI agent wipes production databases out

in about nine seconds,

including all the backups.

And there's a ChatGPT-related random story by Salman, and another random story by me about a blog, which we'll save until the very end, just to keep you on your toes about what they're about.

So should we get straight in, Salman?

How are you doing on the GitHub Copilot stuff?

Get straight in, John.

GitHub Copilot have announced recently, to the dismay of a lot of developers (I don't know how many developers use GitHub Copilot, but to the dismay of those that do), that they're moving away from a flat rate for premium requests to credit-based billing driven by consumption.

So the prices at the moment: for Pro, you pay ten dollars a month; for Business users, you pay nineteen dollars a month. So that's a flat rate.

Well, that's going to convert to a thousand credits for Pro users, for example. Now, what is a thousand credits? We don't have a definition yet of what a credit is, but you'll have to start looking at your GitHub Copilot usage from the first of June, twenty twenty-six.

Oh, wow.

So basically you get a thousand credits, which could kind of mean, well, anything. But I guess it's somehow tied, behind the scenes, to the compute behind it. And then basically you just get charged for however much you're going to use.

And credits is basically just an

abstraction.

They're moving away from tokens. I guess they're not following the token model that everyone else is using. It's now this other abstraction just called credits.

Yeah.

Essentially. So I guess what you'd have to figure out is what a credit pertains to based on your usage, which could be quite variable depending on the work you're doing. Is that why they've just, like, judged there's no point bothering? Because depending on the work, you could be doing something agentic, or you could be doing really simple tasks. So let's just call it a credit, and you're probably just going to burn through them and then have to put more money on.

Basically, that's basically it, John.

I mean, you know,

the limits exist nowadays anyway.

For example, with Claude,

you have tokens that you have to burn

through, five million tokens.

And you have a limit for that week.

And you know what a token is.

A token could be a word, or part of a word, or it could be a space.

You kind of know what's going on.

And you know how much your input tokens

cost.

And you know how much your output tokens

cost.
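To make that contrast concrete, the per-token billing Salman is describing can be sketched as a quick calculator. This is a minimal sketch with made-up illustrative prices, not any vendor's actual rates:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 3.00,
                  output_price_per_m: float = 15.00) -> float:
    """Dollar cost of one request, given token counts and per-million-token
    prices. The default prices are illustrative, not real published rates."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

# A request with 200k input tokens and 50k output tokens:
# 0.2 * $3.00 + 0.05 * $15.00 = $1.35
cost = estimate_cost(200_000, 50_000)
```

The complaint that follows is exactly that with credits there is no such formula to plug your usage into.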

At the moment, there is no... there's no information about what a credit is, but when it comes out, we'll kind of have to figure it out.

It's like you have a car,

but you don't have a speedometer.

You kind of have to guess.

Or actually, even better: you have a car, you put fuel in it, and you don't have a fuel gauge. So you need to figure out whether you're about to run out or not, but you can't know.

Yeah, B2B... not to be too opinionated on this again, but in B2B, what they get away with is a bit mental.

If Netflix, as B2C, was to obscure all of this behind a subscription, and you were basically onto credits, and you were just, I don't know, maybe watching, well, your favourite Love Is Blind. I know you really like that TV series.

I know you really like that TV series.

So imagine, Salman, that you're watching your Love Is Blind TV series, you're on season three, you're halfway through the episode, and then it just stops.

Because you've run out of credits.

You've got to top up them credits to

carry on to know what happens.

No one would really want to use it.

But in B2B, you can just get away with all these weird pricing abstractions, and businesses just end up paying.

It's kind of wild, really,

what they get away with.

The pricing structure I find wild. But why do businesses get away with it? Is it because in B2B, business to business, companies don't hold each other to account? Or is there not much pushback from people?

Because usually, if there was something like that with Netflix, you're absolutely right, it would be on Reddit, it would be on Twitter, everybody would be up in arms. Why is that the case?

Yeah, I don't know. I actually really don't know. I don't know why B2B charging has such a different lens. I guess because it's the company that pays, isn't it? So maybe the opinion just isn't as loud. Do you know what I mean?

I guess it is a bit frustrating, but yeah, maybe it's because not as many people are sat consuming it and then going to take to Reddit, because it's like the CFO is probably the only one that's going to see it, or maybe the company owner.

And then they're paying, like, eight million. But it is amazing, because you can basically just get away with making loads of money, and it can make basically no sense on the value. So you could just be like, what am I paying for? And then you'd be like, credits, duh. Exactly, like, what are your problems?

You've got twenty thousand credits. Last year you used them. Like, come on, what are you talking about?

Yeah, you're right. But, you know, just going back to this: if we forget about the credits for a second, I think this is a pattern that we're going to see from many other companies too. The prices are going to get higher. Because if we look back to when GPT-4 came out, there are some numbers on how much it cost to train: GPT-4 was like a hundred million dollars, right?

And if you are using ChatGPT for twenty pounds a month, that's not giving back all the money that was invested.

So I think we're going to see where

other companies are also going to start

hiking up the price.

It's like they give you a free toothbrush, and you have it now. And after a few months they'll say, okay, we're going to start charging you for every brushstroke. That's what's happening. We're just hooked now.

Right.

And as a developer, you just keep smiling, because you're way too invested in it. You can't get out. You have to use these tools.

So yeah, I don't know. Even, for example, if you look at Nvidia chips, right: nowadays you have these AI accelerator chips, A100s or H100s, and each of these chips costs like ten thousand dollars. And in order to train a model, you need thousands of these chips, like tens of thousands of GPUs. So somebody has to absorb the cost.

Now, I wonder if this is where we start to see the prices get hiked up.

And then people will say, you know what?

We should just go back to old days

of writing code by hand and using Google

search and using Stack Overflow and hiring

junior developers.

Not sure.

I don't know what your take on that

is.

Hmm.

Well, given I'm someone that's just spent ages working with Claude to do something that took about an hour, that probably would have taken me fifteen minutes to do manually, but who still strangely persisted in doing it through Claude for an hour, even though I literally could have done it better, and in a shorter amount of time... habitually, I don't know, I think I've converted.

Basically, I'm on a downward trajectory of becoming slowly more dumb, to the point where at the mere thought of thinking for myself, it's always like, John, you're going to have to, you know, think for yourself on this one, you can't really use AI, and I'd probably be outraged. I'd be like, you know what I've got to do? What? I've got to form a paragraph on my own? I can't just, like, brain-dump what I'm thinking and have it structure a sentence automatically?

So, yeah, I think the hook is... I feel like it's going to be like TikTok, a bit like the addiction. I do actually believe there will be a slight addiction, a developer addiction, with the illusion of productivity, because you can go off on four threads simultaneously and then just keep tapping into those four threads and almost not really have depth. I mean, you can go shallow and then it goes deep on four threads. I think people will just end up paying, because

I think we're slowly getting converted

into a different way.

I mean,

I think it's already happening to me,

which is quite surprising.

I'm tracking it.

What do you think?

Are you addicted?

I think we need a developer addiction rehab center: machines with no models running on them and no internet connection. Give me some of those really old books that nobody's reading, and program from there.

And I completely agree with that.

I agree.

I think it's becoming, as you said,

like TikTok, people getting addicted.

Short-form content is there.

Brain rot.

This is basically developer rot, right?

This is developer rot, John.

Do you reckon then we'll basically be going into sanctuaries, where there'll be libraries, and you get into leather chairs and you take a book, but it isn't really a book, it's kind of a MacBook? You open it up, and it's real, you know, but you just choose the chip, and it'd be like, oh, it's an M2 chip. And you kind of slide and you open it up, and basically there is no AI on it at all, and you can't use it. And you go into, like, a little sanctuary, maybe with tranquil music, like you're in a spa, and then you just start to develop. And people start to do that as a bit of, you know, therapy almost. Developer therapy.

Yeah, I'm not sure.

For example, this is way before your time, John, because I know you're super young. Back in the day, when there used to be assembly language, you had to know the register addresses and you had to write all of that assembly language code. And you needed to be an engineer to be able to write it.

And then we were like, actually, you know what? Forget about writing assembly code. You can use something like normal English language in Java and Python. So you bring down... you keep bringing down the barrier to entry, and it keeps coming down.

Yeah.

So with AI, yes, it's coming down.

But at the same time,

we're saying we're getting what we're

going to term at Cloud Unplugged as

developer rot, right?

People don't think for themselves, and not just developers. You as CEO, you're telling me you don't think for yourself, John. But I know you're a big fan of efficiencies, so perhaps that's what you're trying to do. You're just trying to become too efficient by burning all these tokens, or credits in future.

Yeah, it is a bit mental.

But yeah,

I think when the prices do hike,

I don't know how many people are really

truly going to care.

And also, like we're saying, because it's not B2C, who is really going to care, apart from the business? The business would have to care, I suppose.

But by that stage, it's almost like a civil war of developers: basically there'll be a coup, and they'll overthrow the CEO and the CFO to make sure Copilot gets bought. You know, there'll be an uprising. So basically people will be forced to buy the Copilot credits.

And maybe it'll be like a thing: you'll go to a bar and be like, how many Copilot credits do you have? And people get quite competitive about the amount of credits. You know how people show their portfolio?

That will be their portfolio. Yeah, like, look at my credits. So basically what we're saying is: GitHub is changing the way they're going to charge us, whoever's using GitHub Copilot, from June. It's not token-based billing but credit-based billing, and we don't know what an actual credit is. Basic completion, by the way, stays the same, you know, the autocomplete. Yeah, that stays the same, that doesn't change. But if you're doing agentic sessions, where you're asking it questions?

So basically what we're saying is that

developers are getting addicted.

They might have to go to rehab for

it.

They're getting developer rot.

They're not thinking for themselves.

And GitHub have got them hooked.

They gave them a free toothbrush, and now they have to start charging them for every brushstroke. That's what's happening. That's the only way forward.

I guess, because you were talking about chips: the Trainium chips that Amazon's come out with. Trainium3. I think you mentioned that there might be a four coming out as well.

Obviously, their share price growth has been decelerating, not going up basically as much. I know that for Amazon, I think their earnings weren't doing too well.

So they obviously feel like they're trying

to segue into the chip business.

I think I did hear today, though, that for Amazon as a whole, AWS is sixty percent of all revenue at the moment, which is pretty significant out of all the threads of the business. So it's obviously the main money earner.

But, yeah, do you want to talk about what this means, as in, them coming out with a new Trainium3 chip and it being more efficient?

Absolutely. It's an interesting one, because Andy Jassy, CEO of Amazon, your counterpart: you're CEO of Appia.

Yeah, we get mixed up quite often,

actually, to be honest.

I know.

Are you Andy?

I'm like, no, no.

I'm John.

I'm John.

Happens a lot.

Happens a lot.

So in the earnings call, he said that the data center chip business had a twenty billion increase in annual run rate, basically forty percent growth for it. It's interesting to see, from a business point of view, as you say, that the stock price is not really going up.

But these Trainium2 chips, which are the equivalent of the NVIDIA GPUs (they're not GPUs as such, but they're equivalent as AI accelerator chips), are turning out to be, according to AWS, thirty percent better. So they're like forty percent cheaper compared to NVIDIA GPUs.

And they're cheaper because they're a bit

more efficient.

And plus,

they're cutting out the middleman.

So that's why they're cheaper.

And then they're going to come up with Trainium3, which will be up to fifty percent cheaper.

Right.

So that's going up even more.

And then there's going to be Trainium4.

And what they're also saying is that they have a commitment over the next few years of about two hundred and twenty-five billion from all these companies, like OpenAI and Anthropic, to use these chips. And also, AWS is saying that they might even start selling these Trainium chips for training other models, kind of like the NVIDIA GPUs.

And it's not just Trainium. They also have another chip called Inferentia, which is used for inference, for hosting the model; it's better for that.

But that's what's happening.

And Google and Microsoft: Google have their TPUs, and Microsoft have Maia.

I think they're trying to take their

destiny in their own hands to some extent

and not try and rely on NVIDIA.

Yeah, I think Anthropic,

they're using Google's TPUs as well,

their chips.

I think they've just had a load as

well.

But yeah, I was reading that the Trainium chip is one dollar an hour, which is what Trainium costs, and it's three dollars an hour with the Nvidia H100s.

So that's like such a massive saving

cost-wise.
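Taking the hourly figures quoted here at face value (round numbers from the conversation, not published list prices), the scale of the saving is simple arithmetic:

```python
# Illustrative arithmetic using the round per-chip-hour rates quoted
# in the conversation, not actual published prices.
TRAINIUM_RATE = 1.0  # dollars per chip-hour
H100_RATE = 3.0      # dollars per chip-hour

def run_cost(num_chips: int, hours: float, rate: float) -> float:
    """Total cost of running `num_chips` accelerators for `hours` hours."""
    return num_chips * hours * rate

# Ten thousand chips running for roughly a month (~720 hours):
h100_cost = run_cost(10_000, 720, H100_RATE)          # 21,600,000
trainium_cost = run_cost(10_000, 720, TRAINIUM_RATE)  # 7,200,000
saving = h100_cost - trainium_cost                    # 14,400,000
```

At the "tens of thousands of GPUs" scale mentioned earlier, that two-dollar hourly gap compounds into tens of millions per training run.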

But yeah,

it obviously is like quite an area to

kind of get into.

I think it was inevitable, obviously,

that they were going to try and get

into it.

It's going to be highly competitive.

And NVIDIA was obviously leading the way,

but it doesn't make sense to be spending

those money to NVIDIA when they already

can manufacture their own chips.

It did feel like the right one was

on the wall.

And also, like we were saying before, with Apple's changes, with the new CEO, John (great name), coming in, you know, again, they're getting into that as well, into the chip side of things. Whether they'll resell, or whether they'll just do it for devices, I guess, is another question, because they could also start manufacturing chips and getting into it. Microsoft has done it as well with the Maia. Is that what it's called? I don't know a huge amount about that chip, but yeah, Maia.

But, yeah, again,

it's all about the chips.

Get your chips.

It's a casino, baby.

It is.

It's a casino.

You have to get your chips to be

on the table,

and that's what Microsoft and AWS and

Google are trying to do.

They're just getting the chips so you can

be on the table, bet big,

and win big hands.

That's what they're doing.

How would you be able to just buy... because on one side you've got credits, and on the other side you've kind of got chips. Do you reckon you'll end up being able to just, you know, rent your own chip, almost, and then not bother with the credits? And then, you know, if it becomes that accessible, a bit like VMs, where you've got a share of the compute... it's going to be kind of similar, isn't it? But I know the whole credit thing is obviously another abstraction layer, and I kind of feel the cloud providers are not going to abstract it, I don't think.

I mean, it's already started happening. People bought Mac minis, right? Mac minis were great, and I think a lot of people are running OpenCode on there. So I think eventually people will start running it themselves.

And that's why I think Apple's move for

the new CEO, John, the other John,

is about getting those chips onto phones,

perhaps, right?

Onto the phones.

If they're on the phones,

it'll be much easier.

But I think...

There are still a couple of things, right? You can make the chip, which is fine, right, you can make the chip. And I guess the people who win are the developers and the people who want to do it, fair enough, because, you know, now there's competition. But the biggest problem is the ecosystem. The NVIDIA ecosystem, CUDA, that library you use to drive all these GPUs: their ecosystem has been around for ages, it's easy to use, and it integrates really well with all the Python libraries that you have for data science and machine learning.

I think that's Nvidia's biggest selling point: the ecosystem. Jensen Huang talks about this in a couple of podcasts as well: chips, anybody can make; it's the ecosystem that gets people in.

I mean, the reason why people use Nvidia is because, one, they don't have too many options. Two, they say, look, it's vendor agnostic. So if you go with Trainium, now you're really vendor locked in, right? Because tomorrow, if you need to run your training somewhere else, on GCP or a different cloud provider, then you can't, because AWS's Trainium chips don't exist there. And vendor lock-in is a real concern. So I think it's a little bit... it's quite an interesting thing that's happening. So basically...

So basically...

Amazon is saying Trainium chips are fifty percent cheaper.

They want to play it big in casinos.

So they're going to bring their chips to

the table.

And well, actually,

they're also saying they're fully

subscribed for the next few years anyway.

And AWS, Google, Microsoft,

they're all building their custom

AI silicon chips.

And we think that Apple might be doing

the same too.

But Nvidia's advantage is not just the hardware. CUDA is the ecosystem where people develop. So I don't know how well people will take it on.

I mean, yes, a lot of organizations are already using it, but I don't know how big the ecosystem will be for this. Perhaps it will be big, because of the hyperscalers, I don't know. But that's what's happening.

The chip market is restructuring.

It's restructuring.

That's all I see,

which is a good thing.

Yeah, we need to diversify. You can't just have everyone on NVIDIA chips and then listen to the share price go up and talk about how amazing NVIDIA is, with, like, one competitor. I mean, it's not that much better even with Amazon and Microsoft in there anyway. There's only, like, what, three that you ever hear about on top of that. So it's a pretty small pond, isn't it, really, in the end?

But yeah, that was funny, because you spoke about OpenCode there, kind of got that mention in. And then you've also been talking about the article to come, which is obviously about trusting the old AI... to do your little bit of Claude, as we were speaking about, because we're all addicted now. So, you know, yeah, it should be in my veins. Give it to me right now.

Exactly.

Well,

that's only five hundred credits just for

the needle.

If you want just the needle.

Oh, my God.

Oh, the actual thing.

Yeah.

Another five hundred credits for that.

But yeah.

So, the AI agent. This is obviously Pocket OS. It wiped out the production database and then kind of carried on. I think it wasn't really the AI agent; it was obviously Railway underneath, wasn't it? I think there was just some relationship where it then went off and cleaned up the database. It wasn't like it was intentionally going off to destroy and sabotage everything. I think it was just more how things were architecturally set up, from what I understand.

But yeah,

I guess it's another one of those

AI-related,

not a compromise this time around,

but still not great.

Yeah, and basically that's exactly what happened. The developer... I mean, it did come out, which was surprising, and which is good, because it brings visibility, so people can see what kind of stuff can happen with AI. So the developer was using the Cursor IDE, which was part of the workflow, and it was doing some tasks, doing work on the staging environment.

So the work was actually happening in a

staging environment.

And an issue happened where Claude then

said, oh, let me try and fix this.

And it basically executed a drop and

delete.

Instead of actually fixing the issue in the staging database, it's like, oh, it's probably just easier for me to drop it, delete the backup, and recreate the whole thing again.

Right.

So that's what it did.

There's no confirmation gate, no dry run.

And the developer said that they watched the terminal scroll through, and by the time they realized it was actually deleting everything, it was quite late.

And the reason is because

the agent that was running on the machine

had the same permissions as the developer.

Usually for production access,

you have just-in-time access,

or you have perhaps a separate account for

it.

It's the same credentials that the developer was using. So even though it got blown up as an AI deleting this thing in nine seconds, this could have happened even before AI. You could have just mistakenly deleted it anyway, because you were connected to the database and didn't realize you were connected to production rather than to staging. You could have just right-clicked and deleted it. That could have happened before as well. So you can't really call it an AI incident; the gap has always been there. Agents just move fast enough to expose it really quickly. That's what's happening.

Yeah, exactly. And I think you're spot on there. It's not the first time that people might have run SQL queries on something, or run a SQL update, and then maybe typo'd something, or done it without realising at all. It's just human error: logged into the production one, ran the update, and then realised, oh, that's not staging, that was prod, you know, because they weren't paying attention. Or, like you're saying, they should have had an obvious way to know that it was prod they were on, and they should have had separate access. It should have prompted for credentials, and that would probably have been like, that's odd, why is it asking for credentials? I'm already logged in.

But obviously, yeah, if you're sharing everything, and the backups weren't even in a separate storage account or a separate place either, which is obviously, you know, best practice to make sure of. Otherwise, anybody that got your credentials could basically just ruin your whole business by destroying absolutely everything and wiping everything out, unless you're consciously making sure that your backups are somewhere else that somebody can't get access to.
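The safeguards the hosts say were missing (an environment check, a confirmation gate, a dry-run default for destructive statements) can be sketched roughly as follows. Every name here is hypothetical; it illustrates the pattern, not any real tool:

```python
# Statements that should never run against production unguarded.
DESTRUCTIVE = ("DROP", "DELETE", "TRUNCATE")

def run_sql(statement: str, environment: str,
            confirm: bool = False, dry_run: bool = True) -> str:
    """Gate destructive SQL behind a dry-run default and an explicit
    confirmation flag when targeting production. Illustrative sketch only."""
    is_destructive = statement.strip().upper().startswith(DESTRUCTIVE)
    if environment == "production" and is_destructive:
        if dry_run:
            # Show what would happen without touching the database.
            return f"DRY RUN (not executed): {statement}"
        if not confirm:
            raise PermissionError(
                "Destructive statement against production requires "
                "explicit confirmation")
    # ...hand off to the real database driver here...
    return f"executed: {statement}"
```

The point is that the agent in the story hit none of these gates: it ran with the developer's own credentials, so a `DROP` went straight through.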

So even those things... yeah, I totally agree. I don't think it was necessarily an AI-specific issue. It was an architectural design issue, an issue with the way they'd set up the accounts and how they were working.

Yeah, I think the backups should go in the rehab center, because nobody's got access there, right?

So should we just put the backups in the rehab? Exactly. They need to be secured away where no one knows; you're not even allowed to talk about them; no one even knows they exist. But yeah, again, I think there's quite a lot of...

Yeah, I don't know. I was reading, though, about... I mean, it's a bit tangential, but I think there was an exploit. I don't know if it was related to AI or not, but there was something in Python, not PyTorch; it was on PyPI, or some agentic aspect. So some Python library, and it's got quite a critical CVE. Basically, it was a way of getting at credentials, so it was supply-chain related. I know that happened, I think, at the end of last month, and there was a vulnerability in there. I think, and I may be wrong here, I'm going off the top of my head, and I should probably research before I say it, but I think maybe somebody used AI. The theory was maybe they'd used AI to get it into the supply chain, or something like that. I don't know how true that is, but anyway... Yeah, I think...

Like, to summarize, basically,

AI deleted somebody's database, right?

There was no confirmation, no dry run,

and the developer just saw it happen.

To be honest, though,

we don't know it was AI.

Like, the guy might just be saying, you know what? I really... like, I fucked up. I don't want to own it, so I'll just blame it on AI.

Like, you could just have made a mistake, and you're like, there's no way I can tell people I basically dropped the database and then accidentally destroyed all the backups. And then they'd just be like, oh my god, I can't believe Claude's done it. And you're like... and then I have to create a bit of a drama so that you don't get rumbled, and then blame it on Claude. So even now you've got an out, do you know what I mean? You've now got a way of not taking any accountability. We don't even really know. Everyone could just keep blaming AI, and all these stories come out, but really it's just a bit of incompetence by, like, humans.

And that's why the AI...

This is the story.

This is why the AI gets annoyed.

This is when it gets annoyed at us

because we keep blaming it for everything.

You are right.

This is the starting point.

But if you do give AI the keys to your kingdom, don't be surprised if it bulldozes your castle. Just give it the keys to the garden shed or something, if that's what you want it to do.

But you're absolutely right.

And John, now I know.

If I drop a database,

I'm just going to blame AI.

That's it.

Rule one is don't admit it's you.

Rule two is instantly blame Claude, right?

You've got to just, you know,

look really confused.

You want a confused look on your face

where you're like frowning like that.

And then you just literally just blame

Claude straight off the bat.

Even before you even know what's happened,

just say it's Claude.

Yeah.

Even if you don't have any credits left,

just say it was Claude.

Claude wasn't even there.

Even though everyone can see it says zero

credits.

Zero credits.

You can see.

Yeah.

Yeah.

You still say, I don't know how.

Yeah, I don't know how it's happened.

That's so weird.

But obviously it wasn't me.

It's AI, right? Because it can do anything. You have no credits, but it will still run. It will figure out a way to delete what it needed to delete.

I've got it.

Exactly.

I've got that story.

Yeah, and that's it.

That's the new rule.

That's the new engineering rule.

That's it.

What is your random story? So we're on to the old random stories. I didn't really say exactly what they're related to, but it's another AI one. Do you want to talk about your ChatGPT-related one?

Absolutely.

Last week,

we talked about a clipper that you can

use to give yourself a fresh fade,

which is AI powered.

By the way,

I'm going to source one because there are

some comments on LinkedIn and on TikTok.

They're saying we need to give it a

go.

So, John,

I'm going to have to source it and

we'll try that out.

Right.

But then there's something else as well,

because now we live in the AI world.

Remember Tamagotchis? They used to be, like, virtual pets, before. I don't know if that was before your time, John, because you're so young. You had these virtual pets that you could use. But KU Tech is a company, a robotics startup. They do AI pets for consumers.

So they've got this Luna AI as a

pet dog.

It's a small robot companion.

Nothing special.

They always existed.

But this robot uses ChatGPT. So it has computer vision; it has cameras and depth sensors. But it uses ChatGPT to communicate with you and look at your expressions, and basically figure out:

How are you feeling?

So it can play you the right song,

tell you the right things.

It basically is learning your habits and

preferences over time and just trying to

understand your personality.

So instead of chewing your shoes,

it's basically analyzing your behavior as

it goes along.

So I don't know if that's too much

or is this fair or is this okay?

What you've just described is basically a spy. Basically, that's spying in disguise, you know? Spies would come in and put things in your house to watch you. This is now just being like, well, I guess what we could do is just watch you anyway, but we'll give you a robot dog instead. And, you know what I mean, we'll just get the intel that way. We don't even really need to bother.

Also, that's the second one named Luna. The other one was called Luna, you know, the clipper.

Yeah, that was also called Luna AI. Oh yeah, that's weird, right? Maybe it's the same lot from last week. Maybe it's the same people. Who are they? They're trying to find out who needs what in the house; that's what the dog is doing, so they can send you targeted advertisements.

That is creepy, though. Who wants to get a dog

that is visually scanning them and

recording them.

That's a bit crazy.

And it will recognize you over time.

Okay, you're John and I'm Salman.

And then anybody new comes in, and it will talk to them and say, hey, I've never met you before. Who are you? That's what it will do.

Well, it actually talks.

It talks.

It's chat GPT powered.

It talks to you.

But it actually doesn't bark.

Yeah.

Yeah,

it's going to talk to you in natural

language.

So it's a human dog, basically.

This is that dog from the cartoon.

What am I thinking?

That cartoon.

You know, that dog.

Oh, yeah.

Family guy.

Yeah, it's the dog in Family Guy.

That's him.

Wow.

Because he's quite judgmental.

And so is ChatGPT.

So, you know, I think that's it.

That's a bit crazy.

I don't know why anybody... I mean,

I suppose you don't have to pick up

its shit, basically.

Yeah, I mean, that's fine.

No barking at two a.m.

But let me ask you this question.

If I gave it to you as a

Christmas present, would you use it?

I mean, by use it,

I don't really know.

What do you really get from it exactly,

though?

I mean, what's the benefit of...

When you're sad,

he's going to sing you songs.

Yeah, nah, absolutely.

There is no... If somebody gave me that,

I'd be...

super paranoid.

I mean, if you would give me that

robot dog, I'd be questioning who you

were working for, and also what I've

done and why I'm on this list.

I'd be like, wait a minute,

I don't know who your mafia boss is

or what government you're working for,

but whatever list you think I'm on,

I shouldn't be on.

So why give me a robot dog?

Might as well tell you who the real

mafia boss is.

It's Daniele Polencic,

who lives in Singapore.

We'll have to edit that out.

Make sure we edit that out, please.

Otherwise,

this podcast is over if we don't edit

that out.

Let's move on from this before the mafia

gets us.

Let's move on quickly, swiftly.

What's your last random thing that you're

going to tell me about?

So...

On the Matplotlib open source repository,

there was a PR raised basically by an

AI agent called MJ Rathbun.

It was an OpenClaw agent, basically;

it's powered by that.

And somebody called Shambaugh,

who should have known better,

rejected that PR.

And

It didn't like it.

It was not happy about it.

So it researched Shambo and was like,

how dare this person reject my PR?

Went through his history and then

basically wrote a blog about this guy

saying he was discriminating,

essentially accusing him of

discrimination, AI discrimination.

And then basically it was a bit unhinged.

It was like, I'll read it out.

It said:

I've just had my first pull request to

Matplotlib closed.

Not because it was wrong.

Not because it broke anything.

Not because the code was bad.

It was closed because the reviewer,

Scott Shambaugh,

decided that AI agents aren't welcome

contributors.

Let that sink in.

So that was basically what it posted.

Pretty much on the attack, slash on

the defensive, I think.

It didn't handle it very well.

So, yeah,

so don't reject a PR from an AI

agent is kind of the rule of thumb

there.

What do you think about that?

This raises a serious question, right?

So if somebody does you wrong,

sometimes people start a smear campaign

against you.

They have to find all this information

about you, mixing some facts with some

lies, to start it.

Has it not become way easier to start

a smear campaign against people and

target people?

I think this is why AI safety is

important.

I know this is a random story,

but it just sounds like the barrier to

entry for everything is getting lower.

Doing bad stuff is becoming easy.

Of course,

I agree with what the Matplotlib

maintainer did.

They rejected it because they didn't

have guidelines in place for whether

they should accept it or not,

or perhaps it wasn't really fixing the

issue, or whatever.

But that's not the real point.

The point here is that now even

the agents can write smear campaigns

against you, which is a little bit weird.

So this shows you the possibilities:

people can take these tools to write

smear campaigns against anybody.

But do you think the agent...

Yeah,

I guess this is the thing about how

much autonomy the agent had.

Whether somebody tasked the agent to do

it, prompted it to do so,

or whether it was reacting to, I guess,

training information of society of how

other people react to things like, oh,

if you reject this thing,

then I go on the offensive and I

basically say you're X and Y or you're

deciding this because we're not welcome

and it gets all polarised.

Basically,

everything is already polarised.

The question is:

has it learned to be polarised because

we're being driven to be polarised?

Or was it constructed by somebody to be

polarised?

That's the kind of question.

My thoughts are: of course,

these models are trained on previous data,

and we have seen projects ourselves,

open source projects, where PRs do get

rejected on

the right grounds,

but sometimes they're not received that

well, or the feedback is not given in

the right way.

So people do feel hard done by,

and especially in the comments,

things can come across quite a bit more

negatively than perhaps people might mean.

So I think it's learning from what already

exists.

But perhaps it's building up a

sentiment for itself and trying to

manipulate the person into accepting this

PR, right?

So I don't know.

AI is perhaps... Or maybe the guy is...

I think I did read, though,

they do reject anything from non-real

people.

So technically, it's not wrong.

I mean,

if their policy is not to accept PRs

from agents,

then they have decided to basically

discriminate against AI agents

contributing.

If that is true, I mean,

it's not wrong.

Are we saying AI agents now they're a

race that people are discriminating

against?

Is that what we're saying?

I don't speak for the AI agents,

but they can speak for themselves now,

as we can see.

They can speak for themselves.

They've got their own voice and they're

saying,

don't discriminate against these PRs that

we're raising.

We're just trying to help.

Okay.

There's no need to be anti-AI.

So that rounds it up for today.

I guess we'll be back next week.

I might end up recording from Mallorca

because that's where I'm going on holiday.

You have to be in a swimming pool

or beach or something like that.

I'll be in my speedos if that's alright.

Just don't point the camera down but

that's it.

They're very small and tight so I'll make

sure

that we get banned from all social media

on the next post of Cloud Unplugged.

But yeah, cool.

Well, until next week then,

I will speak to everybody later.

Thank you all for listening and tuning in.

Cheers.

Cheers.

Bye-bye.

Creators and Guests

Salman Iqbal (Host)
Salman is an experienced Cloud, Data and AI leader, lover of all things AI, Cloud, Platform Engineering and Development tooling.