no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.
it can somewhat pretend to remember something, but most of the time it doesn’t work. and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together decade-old top answers from stack overflow, strip out any and all context that made them make sense, and present the result as a solution that seems plausible but absolutely isn’t.
LLMs are literally designed to trick people into thinking what they write makes sense.
they have no concept of actually making sense.
this is not an exception, or an improper use of the tech.
it’s an inherent, fundamental flaw.
Removed by mod
yeah, no… that’s not at all what i said.
i didn’t say “AI doesn’t work”, i said it works exactly as expected: producing bullshit.
i understand perfectly well how to get it to spit out useful information, because i know what i can and cannot ask it about.
I’d much rather not use it, but it’s pretty much unavoidable now, because of how trash search results have become, specifically for technical subjects.
what absolutely doesn’t work is asking AI to perform highly specific, production-critical configurations on live systems.
you CAN use it to get general answers to general questions.
“what’s a common way to do this configuration?” works well enough.
“fix this config file for me!” doesn’t work, because it has no concept of what that means in your specific context. and no amount of increasingly specific prompts will ever get you there. …unless “there” is an utter clusterfuck, see the OP top of chain (should have been more specific here…) for proof…
Removed by mod
yes, that’s exactly the point of everything I’ve said:
to an inexperienced user/developer/admin the output LLMs produce looks perfectly valid, and for relatively trivial tasks it might even work out…but when it gets more specialized it fails spectacularly, and it gets extremely obvious just how limited of a system it really is.
which is why there is so much pushback from professionals. actually that’s pretty much all professionals, not just in IT.
Removed by mod
sure, and that works at small scales and as long as no change is required.
when either of those two changes (large projects where interdependent components become inevitable and frequent updates are necessary), it becomes impossible to use AI for basically anything.
any change you make then has to be carefully considered and weighed against its consequences, which AIs can’t do, because they can’t absorb the context of the entire project.
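to put a rough number on the “can’t absorb the context of the entire project” part, here’s a back-of-the-envelope sketch. the repo path, the ~4 chars per token and the 128k-token window are all assumed ballpark figures, not anything specific:

```python
# rough sketch: does this project even fit into one LLM context window?
# assumptions (made up for illustration): ~4 chars per token, 128k-token window
import os

REPO = "/path/to/some/project"   # hypothetical path, substitute your own
CHARS_PER_TOKEN = 4              # crude average, varies by tokenizer
CONTEXT_TOKENS = 128_000         # assumed "large" context window

total_chars = 0
for root, _dirs, files in os.walk(REPO):
    for name in files:
        if name.endswith((".py", ".conf", ".yaml", ".yml", ".json", ".md")):
            try:
                # file size in bytes is roughly the character count for plain text
                total_chars += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # unreadable file, skip it

approx_tokens = total_chars / CHARS_PER_TOKEN
print(f"~{approx_tokens:,.0f} tokens vs. a {CONTEXT_TOKENS:,} token window")
print("fits:", approx_tokens <= CONTEXT_TOKENS)
```

run that on any non-trivial codebase and it blows straight past the window, and that’s before counting tickets, docs, runbooks and everything that only exists in people’s heads.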
look, I’m not saying you can’t use AI, or that AI is entirely useless.
I’m saying that using AI is the same as any other tool; use it deliberately and for the right job at the right time.
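to make “deliberately” concrete: the only way i’d let LLM output anywhere near a config is as a draft that gets diffed against the real thing and reviewed by a human before anyone touches the box. minimal sketch, file names made up:

```python
# minimal sketch: treat LLM output as a draft, never apply it blindly.
# "running.conf" and "llm_draft.conf" are made-up names for illustration.
import difflib
import sys

with open("running.conf") as f:
    current = f.readlines()
with open("llm_draft.conf") as f:
    proposed = f.readlines()

diff = list(difflib.unified_diff(current, proposed,
                                 fromfile="running.conf",
                                 tofile="llm_draft.conf"))
if not diff:
    print("no changes proposed")
    sys.exit(0)

sys.stdout.writelines(diff)
if input("looks sane? [y/N] ").lower() == "y":
    print("fine - but a human applies it, in a maintenance window, with a rollback plan.")
else:
    print("dropped. nothing touched the live system.")
```

the script never applies anything itself; that’s the whole point.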
the big problem, especially in commercial contexts, is people using AI without realizing these limitations, thinking it’s some magical genie that can do everything.
As a dev: lol. Do it again, you are good at entertaining