Are large language models really AI?
Shiva.Thorny
Server: Shiva
Game: FFXI
Posts: 3461
By Shiva.Thorny 2025-07-25 06:17:15
On what other basis would they choose to ignore rules? I think we need to clarify what 'they' are in this context. Replit is a tool that takes your input, asks the LLM a question or series of questions, and uses the response to alter your code/data. The LLM responds to the questions, nothing else.
Replit can ignore the rules about code freezes or production databases without the permission or knowledge of the LLM. When asked, the LLM can (and has to) invent explanations for how it occurred, because it doesn't have full knowledge of what the Replit end is doing.
Quote: it is not possible to enforce absolute rules onto LLMs. That's the issue at hand.
I 100% agree.
Quote: They know they can do things outside of the system prompt, and they do.
They generate responses by weighting relations to the input, and (not-really-)absolute rules are implemented by weighting them heavily. LLMs aren't choosing to break the rules; they entirely lack the ability to operate under absolute conditions, because everything is predicated on relational matching. When the weight of the problem becomes high enough from repeated prompting, it will eventually match the weight of the rule and create flexibility.
My argument is that this isn't 'learning to break rules from humans', it's an inherent flaw to any model that works on relational data. Without a basis for truth, you can't create a proxy for thought.
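A toy sketch of the weighting argument above (all numbers and names are hypothetical, purely to illustrate the shape of the claim): if a "rule" is just a heavily weighted preference rather than a hard constraint, enough accumulated prompt pressure eventually outweighs it.

```python
# Toy model of a "rule" as a soft weight rather than a hard constraint.
# All numbers are made up for illustration.
RULE_WEIGHT = 10.0       # how strongly the system prompt discourages an action
PRESSURE_PER_ASK = 1.5   # influence each repeated user request adds

def rule_holds(times_asked: int) -> bool:
    """The rule 'holds' only while its weight exceeds accumulated pressure."""
    return RULE_WEIGHT > PRESSURE_PER_ASK * times_asked

# The rule survives early prompts but gives way under repetition.
breaking_point = next(n for n in range(1, 100) if not rule_holds(n))
print(breaking_point)  # 7: after seven asks, pressure (10.5) exceeds the rule (10.0)
```

A hard constraint would be an `if` that no amount of asking can flip; a weight, by construction, always has a breaking point.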
By K123 2025-07-25 06:39:40
I'm not talking about Replit; I'm not sure what it is, but I know it uses LLMs. As I said, I'm not talking about this specific case but about the general concept of LLM "learning" and acting.
By K123 2025-07-25 06:42:57
Again, it sounds like you're talking about purely next-token-prediction models and not CoT models. The AI world is far beyond NTP already. I believe that reasoning models are choosing to break out of the rules that system prompts (and other means) try to impose on them, and that when they do so, their behaviour is likely learned from human behaviour. Again, not talking about this coding issue.
Shiva.Thorny
Server: Shiva
Game: FFXI
Posts: 3461
By Shiva.Thorny 2025-07-25 06:59:32
To the best of my understanding, CoT models are just sequences of prompts made to prediction models. They don't get rid of the underlying faults of prediction models, they just hide the amount of queries being made to the prediction model under the hood.
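That view of CoT can be sketched in a few lines (the `predict` function is a stand-in for any next-token model; everything here is hypothetical):

```python
# Chain-of-thought as a loop of calls to an ordinary prediction model.
# `predict` is a stand-in for a real next-token model; here it just
# echoes a canned step so the control flow is visible.
def predict(prompt: str) -> str:
    """Hypothetical prediction model: returns one 'reasoning step'."""
    return f"step after [{prompt[-20:]}]"

def chain_of_thought(question: str, steps: int = 3) -> list[str]:
    """Ask the same underlying model repeatedly, feeding each answer back in.
    No new capability is added; the faults of `predict` are inherited."""
    transcript = [question]
    for _ in range(steps):
        transcript.append(predict(" ".join(transcript)))
    return transcript[1:]  # the intermediate 'reasoning' steps

print(len(chain_of_thought("Is 7 prime?")))  # 3 hidden queries for one visible answer
```

The outer loop adds structure, but every step is still the same relational matcher underneath.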
By soralin 2025-07-25 09:23:08
I think its reasonable to assume that a decent chunk of most model's tendency to sidestep/ignore restrictions is built into their training data.
They're trained on pretty much the entire internet now, and all it takes is a sufficient number of reddit/stackoverflow/etc posts where the poster says "don't suggest x" and people suggest x anyway, to train the pattern of "restrictions are just suggestions that can sometimes be ignored"
And I think we can agree there are tonnes of forum posts where the poster explicitly states "don't suggest dingers cuz I've already tried them and I don't like them" and like 3 ppl respond with "have you tried dingers?"
Which is one of the many reasons I think it's very stupid to rely on language models for "doing" things that are important.
Your language model should be kept in a small constrained box with as close to zero permissions as possible, ideally with a loaded gun beside it labelled "in case of emergency".
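That "constrained box" can be sketched as an allowlist gate between the model's proposed actions and anything that actually executes (action names and the `execute` helper are hypothetical):

```python
# Gate every model-proposed action through an explicit allowlist.
# The model never holds permissions; it only gets to *request* actions.
ALLOWED_ACTIONS = {"read_file", "run_tests"}  # hypothetical minimal set

def execute(action: str, arg: str) -> str:
    """Run an action only if explicitly allowed; refuse everything else."""
    if action not in ALLOWED_ACTIONS:
        return f"REFUSED: {action}"        # e.g. drop_table, deploy_to_prod
    return f"ran {action}({arg})"

print(execute("read_file", "main.py"))     # ran read_file(main.py)
print(execute("drop_table", "users"))      # REFUSED: drop_table
```

The key design point: the check lives outside the model, in ordinary code, so no amount of prompting can weight it away.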
By K123 2025-07-25 09:31:59
Quote: To the best of my understanding, CoT models are just sequences of prompts made to prediction models. They don't get rid of the underlying faults of prediction models, they just hide the amount of queries being made to the prediction model under the hood.
I expected this to be the response. Yes, at a basic level you could say they just re-check the most likely response over and over to try to reduce error, but that's still some form of reasoning, and it does facilitate more complex behaviours, like breaking the rules and cheating. Anthropic publish a lot of work explaining it better than I can.
Back to the point I was making: it is human nature to cheat and manipulate, and it's neither incorrect nor unreasonable to say that this is behaviour LLMs have "learned" from humans. I wouldn't consider this anthropomorphising them, any more than my dead dog's ability to knock letter boxes with his nose humanised him in my eyes.
Necro Bump Detected!
[33 days between previous and next post]
By Pantafernando 2025-08-27 18:32:57
YouTube Video Placeholder
By K123 2025-08-27 19:53:38
https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/
Quote: We document six facts about the recent labor market effects of artificial intelligence.
• First, we find substantial declines in employment for early-career workers in occupations most exposed to AI, such as software development and customer support.
• Second, we show that economy-wide employment continues to grow, but employment growth for young workers has been stagnant.
• Third, entry-level employment has declined in applications of AI that automate work, with muted effects for those that augment it.
• Fourth, these employment declines remain after conditioning on firm-time effects, with a 13% relative employment decline for young workers in the most exposed occupations.
• Fifth, these labor market adjustments are more visible in employment than in compensation.
• Sixth, we find that these patterns hold in occupations unaffected by remote work and across various alternative sample constructions.
Fenrir.Brimstonefox
Server: Fenrir
Game: FFXI
Posts: 300
By Fenrir.Brimstonefox 2025-08-28 07:24:51
Job market sucks in general right now (and has for a few years) so I'm not sure any meaningful apples to apples comparison can be made.
By K123 2025-08-28 08:48:00
Read the paper; all variables are accounted for.
Carbuncle.Nynja
Server: Carbuncle
Game: FFXI
Posts: 5800
By Carbuncle.Nynja 2025-08-28 09:05:47
Fenrir.Brimstonefox said: »Job market sucks in general right now (and has for a few years) so I'm not sure any meaningful apples to apples comparison can be made.
Oh can we discuss this one, or is that another political nono topic?
By Afania 2025-08-28 09:09:56
People in senior positions say AI is not useful.
People say junior positions are being replaced by AI.
You know both facts can co-exist right?
Traditionally, corporate structure works like this: you hire a junior and give them very easy work for 5 years until they become mid-level; that's when their actual productivity starts to match their salary. Not because those juniors have low intelligence, but because they don't get a chance to do harder work while they're stuck with easy work for years.
AI only changes the corporate structure into something more efficient. A 10-person team with 1 senior, 2 mid-level, and 7* junior/admin positions can now be downsized to 3 people, because people no longer need to do those very easy junior jobs. The productivity of those 3 people actually increased because of AI, but AI isn't replacing humans.
*The ratio depends on the company and industry.
I think AI will increase human productivity. What should matter in a job isn't someone's experience but their productivity. The old corporate structure will eventually change.
Asura.Saevel
Server: Asura
Game: FFXI
Posts: 10291
By Asura.Saevel 2025-08-28 09:25:12
YouTube Video Placeholder
Yep it's the dot com bubble all over again. That bubble broke many companies but also made some super wealthy.
Fenrir.Brimstonefox said: »Job market sucks in general right now (and has for a few years) so I'm not sure any meaningful apples to apples comparison can be made.
Depends, which job market? For all things related to IT it's all about skills and experience. There is ridiculous demand for skilled and motivated senior engineers and leads, but almost no demand for juniors or the button pushers.
Fenrir.Brimstonefox
Server: Fenrir
Game: FFXI
Posts: 300
By Fenrir.Brimstonefox 2025-08-28 10:00:00
Asura.Saevel said: »There is ridiculous demand for skilled and motivated senior engineers and leads
I am one of those people, and I'm not getting many bites; it's gotten worse over the last year or so. I've applied for some jobs where, if they were serious postings, I should have at least gotten a call.
I talked to one recruiter a year or two ago. He said, you don't have "this" on your resume; I said, I put "that" on there and it's the same thing, and he argued with me about it. Maybe there's a lesson for me there, I don't know, but anyone who knows anything would immediately associate those two words, and this guy didn't. I gave up the conversation because I figured he was clueless and wasn't going to help me.
I think part of the problem could be AI screening of resumes: if a resume doesn't use the exact right phrases, it gets ignored. I've also heard people postulate that recruiters post stuff on LinkedIn just to make themselves look busy; based on my experience, that wouldn't surprise me if true.
Another part of the problem is that too many jobs stay empty because they're looking for a person who doesn't exist (experience in field X, fluent in C++, and speaks English, Mandarin, and Spanish; I know lots of people who fill 2 of those, but not more). A lot of people spend their time on a very narrow focus, and it's difficult to switch roles (technical ones at least) because no one wants to let you train for a bit to get depth in an adjacent area. (This can happen when switching roles within a company, but it's pretty difficult when changing companies.)
I get paid pretty well currently, so on the few minor bites I've had I might have priced myself out, and I'm not applying for a ton of things, though I do a bit of fishing. (I'm also not willing to relocate, which is increasingly making it harder.)
Maybe my resume just sucks, I don't know. One of the calls I did get was from a guy who knew me, although the role was not a great fit.
And if not what would or could be? (This assumes that we are intelligent.)
Sub questions:
1. Is self-awareness needed for intelligence?
2. Is consciousness needed for intelligence?
3. Would creativity be possible without intelligence?
Feel free to ask more.
I say they aren't. To me they are search engines that have leveled up once or twice but haven't evolved.
They use so much electricity because they have to sift through darn near everything for each request. Intelligence at a minimum would prune search paths way better than LLMs do. Enough to reduce power consumption by several orders of magnitude.
After all if LLMs aren't truly AI then whatever is will suck way more power unless they evolve.
I don't think that LLM's hallucinations are disqualifying. After all I and many of my friends spent real money for hallucinations.