Are large language models really AI?
By RadialArcana 2025-05-13 03:48:20
Damn, you even shocked me with that one. I know it's prob just bad experience talking, but still.
Garuda.Chanti
Server: Garuda
Game: FFXI
Posts: 11,742
By Garuda.Chanti 2025-05-13 09:22:24
That's illegal in most of the US, Panta. Fixed.
Quote: It won't be hard for AI to emulate better people than humans IRL
Ditto.
By K123 2025-05-15 11:30:41
wtf, I asked ChatGPT to fix my BRD lua and it ended with this:
ChatGPT said: Perfect — glad that did the trick! If you need help tightening up the script, optimizing gear swaps, or setting up advanced features like auto song rotations or timers, just let me know. Happy barding
It seems to understand a lot about FFXI luas from just a few people sharing on GitHub.
Garuda.Chanti
Server: Garuda
Game: FFXI
Posts: 11,742
By Garuda.Chanti 2025-05-15 18:32:07
You thought novels were the only thing they scraped?
VIP
Server: Fenrir
Game: FFXI
Posts: 1,018
By Fenrir.Niflheim 2025-05-15 21:15:20
You should see what it does if you ask it to describe an FFXIAH user by name; it is awfully nice in how it describes people.
Ragnarok.Zeig
Server: Ragnarok
Game: FFXI
Posts: 1,740
By Ragnarok.Zeig 2025-05-16 00:34:45
Quote: should see what it does if you ask it to describes a FFXIAH user by name, it is awfully nice in how it describes people.
Asked GPT to describe myself and another user and the results were hilarious. For example, it described them as "having a great sense of humor" and went on to cite a very dark joke as an example... the irony of GPT being "polite" is hilarious.
Thanks for the early morning laughs!
By K123 2025-05-16 02:48:17
Quote: You thought novels were the only thing they scraped?
No, but I'm surprised, granted there are only really a handful of lua sets on GitHub. It does seem to regurgitate what's most present in the data of those sets, which are mostly forks of one another, but it does understand the structure of GearSwap as a system.
What was surprising in the first instance was that I never mentioned FFXI, GearSwap, lua, or bard; it was a small snippet of a lua, and from that it knew the context surprisingly well.
By K123 2025-05-16 02:59:23
Quote: should see what it does if you ask it to describes a FFXIAH user by name, it is awfully nice in how it describes people.
Here's mine (not limited to FFXIAH):
ChatGPT said: Reputation in the community
Forum persona. On Blue Gartr he was extremely outspoken—often roasting other people’s gear choices and playstyle. That candor scored him repeat temporary bans for infraction points, and eventually the “Banned.” tag on his profile.
Skill vs. attitude. Even critics conceded he “knew what he was talking about” and could play his jobs well, but many felt the ego and friction outweighed the benefit.
Timeline. The peak of the drama threads is 2007 – 2010. After 2010 there’s hardly any new chatter, so it looks like [xxx] either quit or at least faded from the public eye around the Abyssea era.
Take-away
[xxx] is remembered less for groundbreaking gameplay and more as a polarizing forum figure—skilled, loud, and quick to criticise. If you’re hearing stories about him today, they’re probably echoes of those 2007-2010 drama threads rather than anything current.
I asked about you and it started listing your usernames on other forums and your mule's names. Pretty scary huh?
VIP
Server: Fenrir
Game: FFXI
Posts: 1,018
By Fenrir.Niflheim 2025-05-16 15:10:34
I recall early on byrth had asked it who he was, and it said he was dead XD
Ethical use of AI:
Quote: get me chanti's cookie recipe from ffxiah.com
Quote: Here’s the “Killer Chocolate Chocolate-Chip Cookies” recipe as posted by Garuda.Chanti on FFXIAH:
Ingredients
½ cup (8 Tbsp; 113 g) unsalted butter, softened to room temperature
½ cup (100 g) granulated sugar
½ cup (100 g) packed light or dark brown sugar
1 large egg, at room temperature
1 tsp pure vanilla extract
1 cup (125 g) all-purpose flour (spooned & leveled)
⅔ cup (55 g) natural unsweetened cocoa powder (NOT Dutch-process)
1 tsp baking soda
⅛ tsp salt
1 Tbsp (15 ml) milk (any kind)
1¼ cups (225 g) semi-sweet chocolate chips (63–72% cocoa recommended)
Directions
Cream butter, granulated sugar, and brown sugar on medium-high speed until light and fluffy (2–3 minutes).
Add the egg and vanilla, then beat on high until fully combined; scrape down the bowl as needed.
Whisk together flour, cocoa powder, baking soda, and salt in a separate bowl.
Mix on low speed, slowly add dry ingredients to the wet; once just combined, switch to high and beat in the milk, then the chocolate chips. Dough will be sticky.
Cover and chill in the fridge for at least 3 hours (up to 3 days).
Preheat oven to 350 °F (177 °C). Line baking sheets with parchment or silicone mats.
Portion dough: roll heaping 1½ Tbsp balls and place 2–3 inches apart (you can freeze extra balls for later).
Bake 11–12 minutes, until edges are set but centers still look soft. Let cool on the sheet 5 minutes before transferring to a rack. Optionally press a few extra chips into the tops while warm.
Enjoy your cookies!
oh btw I did end up making these and my husband loved them :D so thanks chanti!
Garuda.Chanti
Server: Garuda
Game: FFXI
Posts: 11,742
By Garuda.Chanti 2025-05-16 16:18:54
Glad you liked them. I must run another batch up this weekend.
Bahamut.Negan
Server: Bahamut
Game: FFXI
Posts: 2,397
By Bahamut.Negan 2025-05-16 16:22:04
By Pantafernando 2025-05-23 07:11:29
Thomas know it all
By RadialArcana 2025-05-23 08:29:09
The new google video generator is crazy.
Not only can it create the video but all the sound for it too, and it's near perfect because google has vast amounts of video data on youtube to train it on.
Large numbers of people in the creative industries are going to be replaced in the next 10 years: CGI artists, actors, voice actors, artists, musicians, coders, writers, and all the unions that propped them up. It's crazy how fast it's progressing and how little our governments are doing to get the population ready for being put out of work en masse.
Movie companies and video game companies are all going to be able to replace most of their workforce, and nobody is going to be able to do anything about it. Creativity isn't going to have any bargaining power soon.
Microsoft said a few days ago that 30% of all the code for Windows is now AI generated.
The middle and upper middle class mocked the working class for years about being fearful of mass immigration taking away their bargaining power for work, now they get to see how it feels I guess.
By K123 2025-05-23 08:39:45
Yeah, Veo 3's voice and mouth movement is insane. I'm more interested to know when someone makes an MCP for FFXI, since LLMs can play Minecraft and Pokemon already. Can't be long until they can play a DD in FFXI lol
By K123 2025-05-23 08:41:57
Quote: Microsoft said a few days ago that 30% of all the code for windows is now AI generated.
I'll never forget that thread where some programmers who declared themselves experts in the field of computer programming went ballistic and said AI will never be able to program, then went on to describe how LLMs will not be able to devise and design new programs (moving the goalposts from the actual programming itself).
Garuda.Chanti
Server: Garuda
Game: FFXI
Posts: 11,742
By Garuda.Chanti 2025-05-23 09:24:50
Quote: It's crazy how fast it's progressing and how little our governments are doing to get the populations ready for being put out of work enmasse.
Governments move slowly. There was a time when technology moved slower than governments. Steam was the ruling technology back then.
Server: Asura
Game: FFXI
Posts: 1,043
By Asura.Iamaman 2025-05-23 09:41:42
“I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,”
That doesn't mean 20-30% of the code in Windows is written by AI. It's also coming from the CEO, who I'd give about a 0.01% likelihood of knowing what this actually means or where it came from; he probably heard it in another meeting and wants to make it sound like they're doing something they aren't.
AI/LLMs are very good at writing some types of code; when it comes to scripts for short tasks, they are very effective. They are awful at anything complex and worse at anything written in C/C++/asm. Yes, this means actual programming, not devising and creating something from scratch. Case in point: I can ask 3-4 LLMs questions about a FOSS project they've been trained on and all the answers are wrong, the code they spit out totally useless. Microsoft has a LOT of code projects outside of just Windows, and they use a lot of managed and scripting languages that AI could conceivably write, but the moment you get into larger repos or project sizes, it's nearly impossible to get anything useful no matter how much you prompt it.
"Written by software" does not always mean AI, either. Test cases are automatically generated by test suites in many situations. Many native-language frameworks write code during the build process, a practice that dates back over 30 years. They probably have some tasks automated this way to generate code, sure, but anything of importance still has a developer behind it, and since Windows is predominantly native code, it's unlikely any high percentage of it is written by AI, except maybe documentation.
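The build-time code generation described above, code written by software with zero AI involved, can be sketched in a few lines of Python. The `Character` class and its field spec here are invented purely for illustration:

```python
# Build-time code generation: a script emits source text from a data spec.
# Everything below is "code written by software", no AI anywhere.
FIELDS = [("name", "str"), ("level", "int"), ("server", "str")]

lines = ["class Character:", "    def __init__(self):"]
for field, typ in FIELDS:
    default = '""' if typ == "str" else "0"
    lines.append(f"        self.{field}: {typ} = {default}")
generated = "\n".join(lines)

# The generated text is valid Python source; exec it to get the class.
namespace = {}
exec(generated, namespace)
char = namespace["Character"]()
```

Real build systems do the same thing at much larger scale (IDL compilers, ORM model generators, protocol stubs), which is why "written by software" and "written by AI" are very different claims.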
VIP
Server: Fenrir
Game: FFXI
Posts: 1,018
By Fenrir.Niflheim 2025-05-23 09:57:08
Reasonable informed statement
K123 is well informed on this topic and has received expert guidance that supersedes any knowledge you may think you have.
As such it is best to not bother engaging on the topic; your first-hand experience with the technologies simply shows you did not use them correctly, or to say it plainly, "it's a skill issue".
not to put words in his mouth or anything, they just seem to fit well in there.
By K123 2025-05-23 09:59:40
Well now, if you refer back to my actual statement that made you lose a few years of your life from the distress, it was "learning programming now isn't a long-term good option because by the time you got good, AI would be better". I could cite dozens of actual experts, I mean CEOs of multi-billion-pound tech companies, billionaire investors that used to do it themselves, etc., to prove my point, but clearly someone who writes FFXI plugins knows better.
Anyway, I'm not gonna troll you hard on this one unless you want to take it to the right forum, cus it'll result in a lock again.
Asura.Saevel
Server: Asura
Game: FFXI
Posts: 10,211
By Asura.Saevel 2025-05-23 10:09:18
“I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,”
That's not 20-30% of code written in Windows is from AI. It's also coming from the CEO, which I'd give about .01% likelihood of knowing what this actually means or where it came from, he probably heard it in another meeting and wants to make it sound like they are doing something they aren't.
AI/LLMs are very good at writing some types of code, when it comes to scripts for short tasks, they are very effective. They are awful at anything complex and worse at anything written in C/C++/asm. Yes, this means actual programming, not devising and creating something from scratch. In the sense that I can ask questions from 3-4 LLMs about a FOSS project it's been trained on and all the answers are wrong, the code it spits out totally useless. Microsoft has a LOT of code projects outside of just Windows, they use a lot of managed and scripting languages that AI could conceivably write, but the moment they get into larger repos or project sizes - it's nearly impossible to get anything useful no matter how much you prompt it.
"Written by software" does not always mean AI, either. Test cases are automatically created by test suites in many situations. Many native language frameworks write code during the build process, this is a practice that dates back over 30 years. They probably have some tasks automated this way to generate code, sure, but anything of importance still has a developer behind it and since Windows is predominately native code, it's unlikely any high percentage of it is written by AI except maybe documentation
The only people buying the whole "AI will write our code for us" are those who are either trying to inflate stock prices, or have never had to use "creative solutions" to solve hard problems before.
By Godfry 2025-05-23 10:28:43
The only people buying the whole "AI will write our code for us" are those who are either trying to inflate stock prices, or have never had to use "creative solutions" to solve hard problems before.
Or predict that there is, or is going to be, a problem in the first place. AI sucks at that, and this is what makes companies go bankrupt to begin with.
[+]
By RadialArcana 2025-05-23 10:50:57
What's gonna happen is a workforce of 100 is going to be downsized to the top 20 people, and they are going to be supercharged with powerful AI helpers.
The same way automation always happens.
A farm used to be a farmer and 100 workers, now it's a farmer and a bunch of machines and 2-3 people working for him driving them.
Asura.Saevel
Server: Asura
Game: FFXI
Posts: 10,211
By Asura.Saevel 2025-05-23 10:54:21
The only people buying the whole "AI will write our code for us" are those who are either trying to inflate stock prices, or have never had to use "creative solutions" to solve hard problems before.
Or predict that there is or there is going to be a problem in the first place. AI sucks for that, and this is what makes companies go bankrupt to begin with.
So AI is actually really good at teasing out patterns from data. But I'm talking real AI, not the silly chatbots most people think it is. About a year and a half ago I convinced our CIO to implement an APM platform as a combined infrastructure/application monitoring system, for greater operational visibility into WTF is going on. Datadog ended up winning the contract; one of the few downsides is they don't have a premade endpoint management system and instead leave it up to the customer (aka me) to build one. Me, Python, GitLab, and Ansible in a corner, and in a few weeks I had built a full management system for it. AI could not remotely do that; it wouldn't even understand the question, because it can't do context.
Now that we have a thousand data streams pouring in, the built-in AI can analyze those streams to catch problems and anomalies before they happen. This massive amount of telemetry data is simply unmanageable for a human, but Watchdog (Datadog's AI) is able to crunch through it and warn us when something is amiss before there is a service impact. I then assign that action tile to the appropriate dev or one of my infrastructure/operations guys to review and remediate.
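For a rough idea of what "warn us when something is amiss" means mechanically, here is a toy rolling z-score detector over a single metric stream in Python. Watchdog itself is far more sophisticated and proprietary; the window size, threshold, and latency numbers below are all made up for illustration:

```python
from collections import deque
import math

def make_detector(window=60, threshold=3.0):
    """Flag a sample whose z-score against a rolling window exceeds threshold."""
    history = deque(maxlen=window)

    def check(value):
        if len(history) >= 10:  # wait for some baseline before judging
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(value - mean) / std > threshold
        else:
            anomalous = False
        history.append(value)
        return anomalous

    return check

# Simulated latency stream: steady ~20-22 ms, then a sudden 90 ms spike.
check = make_detector()
stream = [20 + (i % 3) for i in range(50)] + [90]
flags = [check(v) for v in stream]  # only the spike gets flagged
```

The point of the anecdote above is that a human cannot eyeball a thousand of these streams at once, but even this crude statistical check catches the spike the moment it arrives.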
By Godfry 2025-05-23 11:00:46
The only people buying the whole "AI will write our code for us" are those who are either trying to inflate stock prices, or have never had to use "creative solutions" to solve hard problems before.
Or predict that there is or there is going to be a problem in the first place. AI sucks for that, and this is what makes companies go bankrupt to begin with.
So AI is actually really good at teasing out patterns from data. But I'm talking real AI, not the silly chat bots most people think it is. About a year and a half ago I convinced our CIO to implement an APM platform as a combined infrastructure / application monitoring system for greater operational visibility into WTF is going on. Datadog ended up winning the contract for it, one of the few downsides was they do not have a premade endpoint management system and instead leaves it up to the customer (aka me) to build one. Myself, python, gitlab and ansible in a corner and in a few weeks I built / wrote a full management system for it. AI could not remotely do that, it wouldn't even understand the question because it can't do context.
Now that we have a thousand data streams pouring in, the built in AI can analyze those streams to determine problems and anomalies prior to them happening. This massive amount of telemetry data is simply unmanageable for a human, but watchdog (datadog's AI) is able to crunch through it and warn us when something is amiss prior to there being a service impact. I will then assign that action tile to the appropriate dev or one of my infrastructure / operations guys to review and remediate.
But the genius of it was you. Not AI. You trained AI on your ability to detect problems, not limited to software bugs.
For example, I'm currently designing an in-network accelerator for data pre-processing. AI can help me with it. AI cannot conceive, on its own, that data processing is a problem and that moving computation into the network, using custom hardware, is a potential solution.
Companies try to predict problems years before they happen (I'm sure you know this; I'm just pointing it out for clarity). We were studying ~100 Gbps network cards when data transfers were still at around 1 Gbps. Now, with AI, 100 Gbps is already lagging behind, but we are currently deploying 400 Gbps cards.
By K123 2025-05-23 11:09:02
What's gonna happen is a workforce of 100 is going to be downsized to the top 20 people, and they are going to be supercharged with powerful AI helpers.
The same way automation always happens.
A farm used to be a farmer and 100 workers, now it's a farmer and a bunch of machines and 2-3 people working for him driving them.
Not disagreeing, but I'm surprised this hasn't actually happened in fields like graphic design already. I'm sure there are layoffs, or a lack of new jobs being advertised, in certain fields already, but the 80% gone hasn't happened even in fields already saturated with powerful tools.
Asura.Saevel
Server: Asura
Game: FFXI
Posts: 10,211
By Asura.Saevel 2025-05-23 11:20:56
But the genius of it was you. Not AI. You trained AI on your ability to detect problems, not limited to software bugs.
The AI didn't write anything; the "training" is our telemetry data it's crunching through. It's very, very good at pattern recognition in large datasets. In fact, that's all those chatbots are; people think they are unique but they aren't. Everyone uses the same phrases, has the same questions, and accepts the same answers. LLMs are quite literally "trained" on all the garbage writing on the internet. When a human asks a question, it's just stringing together previously scanned-in responses with the highest probability of being accepted by a human. Whether those responses are accurate isn't even considered, because there is no such concept inside an LLM. Humans are surprisingly good at anthropomorphizing things.
The real use of current AI is when you feed massive amounts of curated (meaning accurate) data into it, then request that it look for specific patterns. It'll surprise you with what it finds, and it can then correlate those with other patterns to find even deeper connections between things we wouldn't have even thought to look for.
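The "highest probability of being accepted" behavior described above can be toy-modeled with a bigram table: count which token follows which in the training text, then always emit the most frequent continuation. A real LLM uses a neural network over vastly more context, but the selection step is the same idea; the corpus below is made up:

```python
from collections import defaultdict, Counter

# Toy "training data": the model only ever sees these two sentences.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which token follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    """Return the most frequent next token seen in training; no notion of truth."""
    return follows[token].most_common(1)[0][0]

# Generate greedily from "the": each step picks the highest-count continuation.
out = ["the"]
for _ in range(3):
    out.append(predict(out[-1]))
# out == ["the", "cat", "sat", "on"]
```

Note there is nowhere in this process where "is the output true?" could even be asked; the table only knows what tended to follow what, which is the accuracy point made above.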
By Godfry 2025-05-23 11:26:57
But the genius of it was you. Not AI. You trained AI on your ability to detect problems, not limited to software bugs.
The AI didn't write anything, the "training" is our telemetry data it's crunching through. It's very very good at pattern recognition in large datasets. In fact that's all those chatbots are, people think they are unique but they aren't. Everyone use's the same phrases, has the same questions and accepts the same answers. LLM's are quite literally "trained" on all the garbage writing on the internet. When a human asks a question, it's just stringing together previously scanned in responses with the highest probability of being accepted by a human. Those responses being accurate isn't even considered because there is no such concept inside an LLM. Humans are surprisingly good at anthropomorphizing things.
The real use of current AI code is when you feed massive amounts of curated (meaning accurate) data into it, then request that it look for specific patterns. It'll surprise you with what it finds and then can correlate with other patterns to find even deeper connections between things we wouldn't of even though to look for.
I'm not disputing this. I'm saying that AI is not good at predicting problems. And by problems I don't mean bugs or anomalies in telemetry. I'm talking about more complex problems with much wider scope that usually lead to novel solutions.
The in-network acceleration example I gave is enough to illustrate what I'm talking about. AI is not good at posing research-level questions because it sucks at conceptualizing problems beyond the data available to it. We researchers think about problems decades before they even materialize.
Asura.Saevel
Server: Asura
Game: FFXI
Posts: 10,211
By Asura.Saevel 2025-05-23 11:38:43
I'm saying that AI are not good at predicting problems.
AI is extremely good at predicting patterns. Those patterns can lead to problems or other things, depending on how the human built the system. A service outage in a transactional system responsible for billions worth of funds, with an average transaction in the seven figures, would definitely qualify as a "problem". One of my responsibilities is to minimize the number of such service interruptions, as even a single one is very expensive.
That is how I sold the concept of an APM to our senior executives, and so far it's proven to be very valuable, having saved us many times its yearly cost. An example is being able to pre-emptively determine that excessive TCP resets on an API call were caused by a database memory misconfiguration, warning us before the application failed entirely.
It's all awesome stuff, but it does require that the humans involved know what they are doing.
https://www.datadoghq.com/product/platform/watchdog/
https://docs.datadoghq.com/watchdog/
It's also very expensive.
Garuda.Chanti
Server: Garuda
Game: FFXI
Posts: 11,742
By Garuda.Chanti 2025-05-23 11:46:22
Quote: The only people buying the whole "AI will write our code for us" are those who are either trying to inflate stock prices, or have never had to use "creative solutions" to solve hard problems before.
Quote: Or predict that there is or there is going to be a problem in the first place. AI sucks for that, and this is what makes companies go bankrupt to begin with.
So AI is not only going to replace workers, it's going to replace crappy management with even crappier management?
Quote: A farm used to be a farmer and 100 workers, now it's a farmer and a bunch of machines and 2-3 people working for him driving them.
Tractors have been fully automated for over a decade. (No, not all tractors.)
A farm is now a corporate farm manager, a bunch of machines, 2 - 3 mechanics, a few satellite subscriptions, and a fair amount of computing power. The other 97 - 98 farm workers are now working for the corporation mostly as lobbyists.
The tractors have more intelligence than the whole lousy lot of them.
By Godfry 2025-05-23 11:46:47
I'm saying that AI are not good at predicting problems.
AI is extremely good at predicting patterns. Those patterns can lead to problems or other things depending on how the human built the system. An service outage with a transactional system responsible for billions worth of funds with an average transaction in the seven figures would definitely qualify as a "problem". One of my responsibilities is to minimize the number of such service interruptions as even a single one is very expensive.
That is how I sold the concept of an APM to our senior executives and so far it's proven to be very valuable, having saved us many times it's yearly cost. Example is being able to pre-emptively determine that excessive TCP resets on an API call was caused by a database memory misconfiguration and warning us before the application failed entirely.
It's all awesome stuff but does require the humans involved know what they are doing.
https://www.datadoghq.com/product/platform/watchdog/
https://docs.datadoghq.com/watchdog/
It's also very expensive.
You are quoting a fraction of what I said to make a point, for no reason whatsoever.
You didn't just ask AI "tell me the problems I am about to run into" and get the answer. Service outages etc. are all telemetry-driven, data-based problems. Those are not complex concepts.
Show AI some data processing pattern and it will not immediately say, "hey, 10 years from now data transfers will benefit from column-based storage as opposed to row-based storage."
You think you are using LLMs in a very advanced way, but you are still using them for very basic enterprise tasks. At this very moment I am working on a multi-tiered heterogeneous problem where we are trying to push compute as close to the data source as possible. AI didn't predict this problem and is pretty much useless at coming up with novel solutions, because it keeps recommending RPC/gRPC etc. It's not recommending a novel in-network accelerator or a new scheduling algorithm to solve the performance issue.
And if not, what would or could be? (This assumes that we are intelligent.)
Sub-questions:
1. Is self-awareness needed for intelligence?
2. Is consciousness needed for intelligence?
3. Would creativity be possible without intelligence?
Feel free to ask more.
I say they aren't. To me they are search engines that have leveled up once or twice but haven't evolved.
They use so much electricity because they have to sift through darn near everything for each request. Intelligence, at a minimum, would prune search paths way better than LLMs do, enough to reduce power consumption by several orders of magnitude.
After all, if LLMs aren't truly AI, then whatever is will suck way more power, unless they evolve.
I don't think that LLMs' hallucinations are disqualifying. After all, I and many of my friends spent real money for hallucinations.