From Newsgroup: alt.comp.os.windows-11
On Mon, 11/24/2025 6:00 AM, Daniel70 wrote:
> On 24/11/2025 3:40 am, Paul wrote:
>> CoPilot
>> Can you write me some C code which prints out the first 20 Fibonacci
>> numbers? Attempt to print the numbers on a single output line, with
>> a space character to delimit each number printed out.
> Sorry!! WHAT?? "Attempt"?? Does an LLM really know what 'attempt'
> means ... or does it just produce code that it "thinks" does what
> it is being asked to do??
You have to remember that the LLM's system prompt reads
something along the lines of...
"You are a helpful assistant"
When you have an assistant, you can beat them with a stick,
or you can encourage them to do things.
The use of the word "attempt" does not affect its value
as a directive.
"Make the program print the numbers on a single line"
would have achieved much the same result when writing the program.
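For reference, the program being asked for is only a dozen
lines of C. Here is a minimal sketch of my own (not CoPilot's
actual output, just the shape a correct answer would take):

#include <stdio.h>

int main(void)
{
    /* Start the sequence at 0, 1 -- whether "first" means
       F(0) or F(1) is exactly the sort of ambiguity the
       model has to guess at. */
    unsigned long a = 0, b = 1;

    for (int i = 0; i < 20; i++) {
        printf(i ? " %lu" : "%lu", a);  /* space between, not after */
        unsigned long next = a + b;
        a = b;
        b = next;
    }
    printf("\n");
    return 0;
}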
The program is not "copied verbatim" out of the 420TB of
training data. The process is not that crass. The AI has
derived some language rules for what statements go with
what other statements. The Mixture of Experts module that
gets loaded, however, does not "think" about all aspects
of programming. If it uses a subroutine that has a known
bug in it, it will not tell you about the bug. You have to
figure that out for yourself while running your testbench.
This is why I can comfortably get a 20-line program
from it -- the odds of it writing glaringly bad code
are not that high. If you ask for longer modules, as in
"Write me a replacement for the entire Linux OS",
you would not expect that to end well. Maybe the safety
timer would go off after 15 seconds.
On occasion, the LLM AI will answer with "why don't you
write the program yourself?".
*******
For fun, try this with an LLM AI
"What are your capabilities?"
You will get reams of output on the screen,
before the safety timer goes off, and...
the LLM AI erases the screen :-)
When you craft questions for the "AI", you have
to be pretty careful to not trigger a flow of sewage.
This would be especially important if you were paying
for the tokens.
That was the first question I asked an LLM AI.
I "thought" the thing would have some canned responses
for new users, and it could succinctly tell you
"I am a helpful assistant who cannot do math". But
I was wrong, and the AI went off on a hallucination
spree like you would not believe. Pretty funny, for
my first question.
Will they ever fix things like this? Hmmm.
I'm still waiting for the AI to say "I don't know".
It is supposed to be able to say that, but I've
not read any accounts of it actually happening.
I do not see a theoretical reason for it to
happen either. As long as some non-zero
confidence score is assigned to an answer,
the machine is going to trot it out and print it.
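The mechanics behind that are easy to sketch. At each step
the decoder scores the candidates and takes the best one;
there is no branch that refuses to answer. A toy illustration
of my own, assuming plain greedy decoding (real samplers are
fancier, but the point stands):

#include <stdio.h>

/* Greedy decoding in miniature: pick the highest-scoring
   candidate. Note there is no threshold below which we
   decline to answer -- a 5% best guess wins the same way
   a 95% one does. */
static int pick_answer(const double *scores, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (scores[i] > scores[best])
            best = i;
    return best;
}

int main(void)
{
    /* Hypothetical scores for a question the model barely
       knows. The winner is only 5% confident, but it gets
       printed anyway. */
    const char *answers[] = { "Paris", "Lyon", "Marseille" };
    double scores[] = { 0.05, 0.03, 0.02 };

    printf("%s\n", answers[pick_answer(scores, 3)]);
    return 0;
}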
Paul
--- Synchronet 3.21a-Linux NewsLink 1.2