They can and will produce deterministic tests for their code, just like you would
But those tests are themselves still generated language
And we human coders already know it's really easy to get a false positive in a test
Or how to miss an edge case
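A minimal sketch of the kind of false positive meant here, with a hypothetical clamp() function: the test passes because it only exercises the happy path, so the bug survives.

```python
# A hypothetical clamp() with a subtle bug: it never applies the lower bound.
def clamp(value, low, high):
    return min(value, high)  # bug: the "low" argument is ignored

def test_clamp():
    # Passes -- a false positive -- because every input here
    # happens to avoid the buggy branch.
    assert clamp(15, 0, 10) == 10
    assert clamp(5, 0, 10) == 5

test_clamp()  # green, despite the bug

# The missed edge case: clamp(-5, 0, 10) returns -5 instead of 0.
```
A generated test suite can look exactly like this: syntactically fine, passing, and still blind to the edge case.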
Which is just a way of saying that we've been using computers to calculate answers since the days of the loom and the adding machine
1 + 1 always equals 2
(except when it equals 11)
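The "11" case is string concatenation, where the same "+" means joining instead of adding. A two-line illustration:

```python
# Arithmetic addition vs. string concatenation:
print(1 + 1)      # 2
print("1" + "1")  # 11 -- "+" on strings concatenates, it doesn't add
```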
But rather that IT CANNOT TELL THAT IT IS CORRECT with 100% or even 99.9% (or realistically, 80%) accuracy
The output is language
It MIGHT be the same language as a correct answer to a question
It MIGHT NOT
And the funny thing is they're half right
The skill is in understanding what it cannot do
Which is be correct
A massive abacus with a dash of randomness
Hold the training and context static, set the temperature to 0, and they will still compute a probabilistic output and choose the likeliest
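A toy sketch of the point, using made-up logits for four candidate next tokens: the model always produces a probability distribution, and "temperature 0" just means collapsing it to the argmax, deterministically picking the likeliest entry.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    # As temperature -> 0 the distribution collapses onto the likeliest token.
    scaled = [l / max(temperature, 1e-9) for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]

probs = softmax(logits, temperature=1.0)  # still a spread-out distribution
greedy = max(range(len(logits)), key=lambda i: logits[i])  # "temperature 0"
# greedy is index 0: the likeliest token, chosen deterministically --
# but the underlying output was a probability distribution all along.
```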
Working with LLMs is like that except even dumber because they don't actually have a sense of empathy because they don't have senses
Appealing to their sense of empathy helps
"We wash our hands so we don't get each other sick"
"Soap doesn't kill germs, but scrubbing with soap is what makes them go away, so you gotta scrub longer"
I have seen my nieces and nephews pantomime washing because they can't reach the soap and faucet, instead of asking for help
I have seen them lie to my face about washing (and giggle because lying is funny apparently)
Microsoft Azure earnings were down and they're concerned you might think it's because AI wasn't selling
A Salesforce SVP said they were "more confident" about LLMs a year ago because now they're having to put "deterministic" guardrails on Agentforce
Anyway...
(Azure SQL does support MS.CDC or MS.REPLICATION but only within Azure SQL)
"The output of an LLM is language, not truth" is another litany
There are many pitfalls. It will imagine APIs, or imagine that real APIs do things they're not supposed to
It does work best as a sounding board. The Agent and Planning modes in GitHub Copilot are pretty good at some things and terrible at others
(Also Pertussis/Whooping Cough is on the rise again)
Stay safe, everyone, and @ me if you want a recommendation on masks
The One Armed Man in The Fugitive was played by the late great Andreas Katsulas, whom I best remember as G'Kar from Babylon 5