ryeats a day ago

You know that teammate who makes more work for everyone else on the team because they do what they're asked, but in the most buggy and incomprehensible way? The one who, when you finally get them to move on to another team, makes you realize how much time you spent corralling them and fixing their subtle bugs, because now that they're gone work doesn't seem like so much of a chore?

That's AI.

  • Spooky23 a day ago

    Just like a poorly managed team, you need to learn how to manage AI to get value from it. All ambiguous processes are like this.

    In my case, the value I find in LLMs with respect to writing is consolidation. Use it to make outlines, not the writing itself. One example: I record voice memos when driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day it saves me a lot of time and arguably makes me more effective.
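
    A minimal sketch of that kind of pipeline, assuming the OpenAI Python SDK (the model names and the prompt are illustrative, not a recommendation):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # 1. Transcribe the voice memo recorded while driving or jogging.
        with open("memo.m4a", "rb") as f:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

        # 2. Consolidate it into an outline: the model organizes, the human writes.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Turn this rambling voice memo into a "
                 "terse outline. Group related points. Do not write finished prose."},
                {"role": "user", "content": transcript.text},
            ],
        )
        print(resp.choices[0].message.content)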

    AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.

  • DavidPiper a day ago

    We need to update Hanlon's Razor: Never attribute to AI that which is adequately explained by incompetence.

    • xerox13ster a day ago

      And just like the original Hanlon’s Razor, this is not an excuse to be stupid or incompetent.

      It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.

  • blibble a day ago

    > You know that teammate

    now imagine he can be scaled indefinitely

    you thought software was bad today?

    imagine Microsoft Teams in 5 years time

    • darthcircuit a day ago

      I’m not even looking forward to Microsoft teams on Monday.

    • ThatMedicIsASpy a day ago

      I only need to look at the past 5 years of Windows

  • bambax a day ago

    I'm extremely wary of AI myself, especially for creative tasks like writing or making images, etc., but this feels a little over the top. If you let it run wild then, yes, the result is disaster, but for well-defined jobs with a small perimeter AI can save a lot of time.

    • runiq a day ago

      In the context of code, where review bandwidth is the bottleneck, I think it's spot on. In the arts, comparatively -- be they writing, drawing, or music -- you can feel almost at a glance that something is off. There's a bit of a vibe check thing going on, and if that doesn't pass, it's back to the drawing board. You don't inherit technical debt like you do with code.

  • 0xEF a day ago

    You are not wrong, but I pose the argument that too many people approach Gen AI as a replacement instead of a tool, and therein lies the root of the problem.

    When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or ways to troubleshoot a problem I am having. I don't always follow its advice, either, but that depends on how much I understand the reply. Sometimes it outputs something that makes sense at my current skill level; sometimes it proposes things that I know nothing about, in which case I ask it to break them down further so I can go search the Internet for more info and see if I can learn more, which pushes the limits of my skill level.
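
    For what it's worth, a bare-bones sketch of that review-only loop, assuming the Anthropic Python SDK (the model name and file name are placeholders):

        import anthropic

        client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

        with open("my_script.py") as f:
            my_code = f.read()

        # Ask for a review, not a rewrite: the suggestions stay suggestions.
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Review the following code I wrote. Point out likely bugs "
                           "and suggest improvements, but do not rewrite it for me:\n\n"
                           + my_code,
            }],
        )
        print(msg.content[0].text)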

    It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.

    When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their use of it. They want the AI to produce the whole project, as opposed to just using it as a second brain to offload some mental chunking. That's where Gen AI fails, and the user spends all their time correcting convoluted mistakes caused by confabulation, unless they're making a simple monolithic program or script, but even then there are often hiccups.

    Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.

    • cardanome a day ago

      Generative AI is like micromanaging a talented junior dev who never improves. And I mean micromanaging to such a toxic degree that no human would ever put up with it.

      It works, but it's simply not what most people want. If you love to code, then you've just abstracted away the most fun parts and now only have to do the boring parts. If you love to manage, well, managing actual humans and seeing them grow and become independent is much more fulfilling.

      On a side note, I feel like prompting and context management is something that is easier for me personally, as a person with ADHD, because I am already used to working with forms of intelligence that are different from my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I am stereotyping a bit here, but it's still an interesting observation.

      Prompting is indeed a skill. Though I believe the skill ceiling will lower once tools get better, so I wouldn't bank too much on it. What is going to be valuable for a long time is probably general software architecture skills.

      • nathan_douglas a day ago

        I don't disagree with anything you've said, but I _do_ think I'm starting to enjoy this workflow. I don't mind the micromanagement because it's usually the ideas that appeal most to me, not the line-level details of writing code. I suppose I fit in somewhere between the "love to code" and "love to manage" dichotomy you've presented. Perhaps I love to make it look like I have coded? :)

        I set up SSH certificates in my homelab last night with Claude Code. It was a somewhat aggravating process - I had to remind it a couple times of some syntax issues, and I'm not sure that it actually took less time than I would've taken to do it myself. And it also locked me out of my cluster when it YOLO'ed some changes it should not have. On the whole, one of the worst AI experiences I've had recently.

        But I'm thrilled with it, TBH, because it got done, it works, I didn't have to beat my head against the wall for each little increment of progress, and while Claude Code was beating its own head against the wall, I was able to relax and 1) practice my French, and 2) read my book (Steven Levy's _Artificial Life_, which I recently saw excerpted on HN).

        The general state of things is probably still pretty terrible. I know there's no end of irritations that I have with Claude Code, and everything else I've looked at is even less pleasant. But I feel like this might be going in a good direction.

        *EDIT*: It should go without saying that I'd much rather be mentoring a junior person, though, as you said.

    • scarecrowbob a day ago

      "Gen AI is a great tool, if you approach it with the right mindset."

      People keep writing this sentence as if they aren't talking to the most tooled-up group of humans in history.

      I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.

      Hell, I have preferred ligature fonts for different languages.

      Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.

      • ninetyninenine a day ago

        They write that sentence because gen AI has been effective for them.

        We have intelligent people using ai and claiming it’s useful.

        And we have other intelligent people who say it’s not useful.

        I’m inclined to believe the former. You can’t be deluded about positive usefulness. But you can be about the negative, simply by using the LLM in a half-assed way and picking the most convenient conclusion without nuance.

        • kentm a day ago

          There are actual studies showing that you can be deluded about positive usefulness. There was a study showing that people using AI thought they were 20% more productive but actually had lowered productivity. Even productive people do not accurately estimate or properly track time and effort.

          • ninetyninenine 20 hours ago

            Interesting, show me the study. My initial reaction is that it’s bs, but let me see the study before I make a judgement.

        • runiq a day ago

          > You can’t be deluded about positive usefulness.

          If you honestly believe that, I've got a bridge to sell you.

          • ninetyninenine a day ago

            How can you be deluded? Everyone has used it. They literally see the positive results. It’s not speculative.

            But you can miss the positive results if you haven’t used LLMs recently or used agentic AI like Cursor. It’s easy to miss the positives.

        • shakna 15 hours ago

          "You can’t be deluded about positives usefulness."

          If that were true, then we would not have the Dunning-Kruger effect. Regardless of your intelligence, all of us are susceptible to a cognitive bias that makes us think that we are better than we actually are at some things.

          The classical case used to demonstrate the Dunning-Kruger effect is self-assessment. That is, rating how well you think you can do a task, or how well you performed it - which is precisely what is happening here!

          People are shit indicators of their own performance. With a great new placebo tool, people are incredibly likely to say it improved their life. Even though it did nothing at all.

          Being deluded about positive usefulness is normal.

          • ninetyninenine 15 hours ago

            You just made that up. You created a connection between the Dunning-Kruger effect and positive delusion about tools. Do you have data to back that up?

            I mean as much as we complain about LLMs hallucinating, here's an example of a human making shit up out of thin air. What's going on here is NOT self assessment. It's obviously assessment of an LLM.

            Additionally, the Dunning-Kruger effect, like all of psychology, stands on shaky ground.

            • shakna 15 hours ago

              So... attack first? That's what you're going with?

              DKE has been confirmed more than anything else. It was one of the few things not hit by the replication crisis.

              You're assessing how well you do, when aided or not, by a tool. That's still self-assessment, I'm afraid.

              And that self-assessment is flawed. [0]

              [0] https://arxiv.org/abs/2507.09089

              • ninetyninenine 13 hours ago

                >You're assessing how well you do, when aided or not, by a tool. That's still self assessment, I'm afraid.

                Aid? I assess the output of the tool. Whether that tool aids me is another topic altogether. I am not using the tool to augment my existing abilities.

                The tool is literally doing the task for me and I am evaluating the results afterwards. This is not some wrench that augments my existing strength. This is more of an assistant than a tool, but wording can be manipulated so that assistants can also be thought of as tools. Let's not manipulate the wording to be in our favor and instead go for intent. The intent here is that the LLM is clearly different from a wrench. When you evaluate a wrench you also evaluate yourself, because you are operating the wrench. When you evaluate an LLM, you are not operating it. You gave it a prompt and it went off on its own to do something.

                https://www.scientificamerican.com/article/the-dunning-kruge...

                This is what I mean by Dunning-Kruger. Either way, whether it's legit or not, my points still stand.

                >So... Attack first? That what you're going with?

                What is this? Attack first? Who is getting attacked here?

                • shakna 12 hours ago

                  You attacked.

                  > You just made that up... I mean as much as we complain about LLMs hallucinating, here's an example of a human making shit up out of thin air

                  You called me a liar and delusional, and accused me of making shit up.

                  As you appear to ignore all things contrary to your current opinion (as the article you linked points out, it affects everyone) and have acted like a dick, this conversation is now over.

        • scarecrowbob 13 hours ago

          What you're missing in the discussion is that you've got an unexamined assumption that other folks -haven't- used these tools based on your conclusion that they are simply useful; you have assumed that if folks haven't found them useful then folks haven't "really" used them.

          But that's simply not true.

          Not only have I used these tools and found them to be unhelpful to me, I have good reasons why I don't think they are helpful. I can even give two modalities in which I find them actively unhelpful:

          - for creative work, they don't allow me to chew over the details which I find important to struggle with as I express my thoughts and how to communicate them

          - for rote lookup or facts, I either understand the underlying material such that my code completion or templating tools are faster and clearer for me or I probably need to struggle with the underlying complexities until I can generalize the problem myself.

          You simply assume that I'm not, like, a 47-year-old with an annoying theory of mind and learning, who has conceptual models for how I learn things based on almost 3 decades of teaching hundreds of students, coaching dozens of my cohort, and learning many skills across several domains.

          Which is fine. I am old enough that "you're holding it wrong" is something I've seen several times in my life.

          But at the end of the day, all you have is the usual "you're holding it wrong" objection that most folks have to technology that doesn't actually fit well.

          I will give you some free advice, totally worth what you're paying for it.

          It is indeed entirely possible that humans are quite often "deluded about positive usefulness" of different tools. That delusion can often be a difficult or painful lesson. I've got a lot of tendon issues from rock climbing and bad scalar patterns in my clarinet playing to prove that well enough for myself.

          I suggest that if you really believe that anything which helps you in some short term kind of way won't hamper you in your future endeavors, you might want to question that belief.

          If you can't think of any examples (cocaine being one easy example) then I suggest that you don't know enough about the world to be conjecturing about it as you have been doing here.

          In any case, good luck. Clearly all the people disagreeing with you here are wrong.

          • ninetyninenine 13 hours ago

            >In any case, good luck. Clearly all the people disagreeing with you here are wrong.

            Doesn't prove your case. Plenty of instances where everyone is wrong and one person is right. Lead, for example, was once thought by everyone to be healthy. Very few people considered it toxic.

            >I will give you some free advice, totally worth what you're paying for it.

            Could be completely useless advice and totally worthless. You declaring it worth it does not suddenly make the advice valuable. In fact I'm anticipating negative value.

            >It is indeed entirely possible that humans are quite often "deluded about positive usefulness" of different tools. That delusion can often be a difficult or painful lesson. I've got a lot of tendon issues from rock climbing and bad scalar patterns in my clarinet playing to prove that well enough for myself.

            Of course it's possible. It's just rarer. I put values into a calculator; the calculator does the calculation faster than me. Was that delusion? There's a clear example. Can you give me a clear example of the alternative, where using a tool only feels useful but isn't? Your rock climbing examples feel like a bit of a stretch. In fact, they feel like counterexamples: you eventually noted that they weren't useful.

            >If you can't think of any examples (cocaine being one easy example) then I suggest that you don't know enough about the world to be conjecturing about it as you have been doing here.

            I suggest that you actually don't know enough about the world compared to me, given my 60+ years of being alive. Your attitude is rude and condescending. But you know, I often wonder what would trigger someone to be like this? Like, why can't you be impartial and just give counter-evidence? Why did you have to approach this whole thing with this attitude of "let me give you a fucking tip"? Is it because I hit a nerve? Because one aspect of what I'm talking about is right and it's hard to face the truth? I don't know. I can only speculate.

            Cocaine was at one point in time not known to be addictive. You could be right here with that analogy. But we can't fully prove it, can we? The answers given by an LLM are too varied to form a definitive answer. Cocaine EVENTUALLY outputs a definitive symptom of addiction and other bad outcomes that are statistically significant. So even though at one point in time we didn't know... over time, cocaine yielded definitive answers. But LLMs used for programming? What are we even measuring? We don't even know. So it's hard to see some definitive answer revealing itself over time. All I see are endless debates where I'm right, and I can't convince a kid like you that you're wrong.

            • z0r 12 hours ago

              Are you really >60 years old? You have a young posting style.

              • ninetyninenine 11 hours ago

                I am. People say I look like I'm in my mid forties so that may be a factor.

  • bdangubic a day ago

    smart people are reading comments like this and going “I am glad I am in the same market as people making such comments” :)

    • ookblah a day ago

      seriously, the near future is going to be:

      1) people who reject it completely, for whatever reason.
      2) people who use it lazily and produce a lot of garbage (let's be honest, this is probably going to happen a lot, which is maybe why group #1 hates this future; reminds me of the outsourcing era).
      3) people who selectively use it to their advantage.

      no point in groups 1 and 3 trying to convince each other of anything.

      • cgriswald a day ago

        I think that has been the state of affairs for a while now.

        I think your explanation for group 1 is true to a degree, but I have two additional explanations: (1) Some element of group 1 is ideologically opposed. It might be copyright, or Luddism, or some other concern for our fellow humans. (2) Some are deluded into thinking there are only two groups and that group 3 people are all delusional.

        Although it is probably an uphill battle, I do think both groups 1 and 3 have things to learn from each other.

        • kentm a day ago

          To be fair, there are a lot of people (especially on Hacker News) in group 2 convincing themselves that they are in group 3. And people in group 1 see that and think that group 3 is a lot smaller than AI acolytes think.

    • IAmGraydon a day ago

      I’m glad for now. Understanding how to utilize AI to your advantage is still an edge at the moment, but it won’t be long before almost everyone figures it out.

      • bdangubic a day ago

        it’ll be years, because 87.93% of SWEs are subpar, like the post I commented on.

      • raincole a day ago

        Yeah. Interestingly enough, I've found utilizing AI is a very shallow skill that anyone should be able to learn in days. But (luckily) people have some tendency preventing them from doing so.

        • bdangubic 19 hours ago

          with all due respect, this could not be further from the truth. Not only can you not get good in days, but it is an ongoing journey. I have spent many, many months learning the ins and outs and still spend an hour or two every day on learning/perfecting/…

  • billy99k a day ago

    You can think that... and you will eventually be left behind. AI is not going anywhere and can be used as a performance booster. Eventually, it will be a requirement for most tech-based jobs.

    • andersmurphy a day ago

      This reminds me of crypto’s “have fun being poor”. Except now it’s “have fun being left behind/being unemployed”. The more things change, the more they stay the same.

      • billy99k a day ago

        A bit different when you actually see the results.

        A guy I went to high school with complains endlessly about AI-generated art and graphics (he's an artist) and, like you, just wants to bury his head in the sand.

        Consumers don't care if art is generated by AI or humans and in a short period of time, you won't be able to tell the difference.

        With the money being poured into AI by all major tech companies, you will be unemployed if you don't keep up with AI.

        • tombarys 10 hours ago

          > "Consumers don't care if art is generated by AI or humans"

          Maybe not yet. Real "art" consumers have always been very sensitive and asked for originality (thus scarcity). It is an essential principle of art that it is the result of thousands or millions of deliberate choices. If you use a machine for creation, you make fewer choices. You delegate most of your talented/crazy/hard choices to the model (which is based on such choices by the already talented, but combines them in a random way). The result is thin and diluted, even if it seems deliberate. In my opinion, most art lovers will continue to seek out the dense art made by humans, asking for some kind of proof. :) Real art will be even more appreciated, I guess.

        • andersmurphy a day ago

          If the last few years of the AI hype cycle have taught me anything, it's that there's a massive late-mover advantage.

          Anyone who spent time learning the AI tools over that period has basically wasted their effort. Working with agents is nothing like prompt engineering. I imagine whatever comes after will be nothing like agents, etc. Sounds like those who try to keep up with AI will be equally unemployed.

          • billy99k a day ago

            If the HN community is an example of this, they will be left behind regardless, because they will avoid all tooling and the benefits that come along with it.

            I suppose I shouldn't care too much. Less competition for people like me that have embraced the change.

            • andersmurphy a day ago

              Thing is, short/medium-term VC subsidies require lots of users to embrace AI. If they don't, the money dries up and you end up paying the full price for these models, which are currently heavily discounted (this is an understatement). How much are you currently paying for your usage, 20$/m? 200$/m? How does that look when it's 2000$/m? 20000$/m?

              • billy99k a day ago

                With all of the competition in big tech, prices will go down.

        • abenga a day ago

          We care. If I get a video recommendation on YouTube and it is AI-created, I blacklist the channel. I will never listen to AI music. Even articles: the only way I will keep reading someone's writing is if I never find out they use it. I consume media and art to commune with my fellow man, not to look at pretty bitmaps and read mere strings of prose.

          • billy99k a day ago

            You are not the average consumer.

      • mwigdahl a day ago

        Yes, and it was exactly the same with compilers. All hype and fad -- everyone who's serious about software development writes in assembly.

        • andersmurphy a day ago

          It's a false comparison: compilers are deterministic. The only probabilistic behavior I've seen has been for performance (query planning/branch prediction).

          I mean, you're not wrong: the serious people drop into assembly when they need to. Even if you work in a context where you can't or don't drop down into assembly, being able to make your own compilers is incredibly useful.

    • sampl3username a day ago

      Left behind what? Consumeristic trash?

      • dragontamer a day ago

        Don't you see that the future is XML SOAP RPCs? If you don't master this new technology now, you'll be left behind!!

        Then again, maybe I'm too old now and being left behind if I remember the old hype like this....

        The entirety of the tech field is constantly hyping the current technology out of FOMO. Whether or not it works out in the future it's always the same damn argument.

      • billy99k a day ago

        The workforce in tech.

    • ryeats a day ago

      I was being a bit melodramatic. I'll use it occasionally, and if AI gets better it can join my team again. I don't love writing boilerplate; I just know it's not good at writing maintainable code yet.

    • rsynnott a day ago

      I mean, the promoters of every allegedly productivity-improving fad have been saying this sort of thing for all of the twenty-odd years I’ve been in the industry.

      If LLMs eventually become useful to me, I’ll adopt LLMs, I suppose. Until then, well, fool me once…

    • BrouteMinou a day ago

      When all you got is pontificating...

  • threatripper a day ago

    You sound bitter. Did you try using more AI for the bug fixing? It gets better and better.

    • ryeats a day ago

      My interests tend to be bleeding edge, where there is little training data. I do use AI to rubber-duck, but can rarely use its output directly.

      • threatripper a day ago

        I see. In my experience, current LLMs are great for generating boilerplate code for basic UIs but fail at polishing UI and business logic. If it's important, you need to rewrite the core logic completely, because they may introduce subtle bugs due to misunderstandings or sloppiness.

        • ryeats a day ago

          Yep, you are also right. Some amount of boilerplate code is perfectly reasonable, since some problems are similar but just different enough, and unique enough, that they don't merit designing an architecture that gets rid of the boilerplate. This is probably the most useful thing that AI could do for us. As a maintainer, I am more worried that we won't see that we are copying all that boilerplate too often, that its subtle bugs are multiplied, and that now we have to maintain all that code, because AI doesn't yet do that.

    • skydhash a day ago

      Cognitive load is not related to the difficulty of a task. It’s about how much mental energy is spent monitoring it. To reduce cognitive load, you either boost confidence or avoid caring. You can’t have confidence in AI output, and most people proposing it look like they’re preaching not to care about quality (because quantity, yay).

      • threatripper a day ago

        But quality is going up a lot. Granted, it's not up to human levels yet, but it is going up fast. We will also see more complex quality control of AI output, tailored to specific use cases and sold at a premium. Right now these don't exist, and if they did exist it would be too expensive to run 100x requests for the same amount of output. So humans are stuck doing quality control, for now.

    • Arainach a day ago

      One of the biggest problems with AI is that it doesn't get better and better. It makes the same mistakes over and over instead of learning like a junior eng would.

      AI is like the absolute worst outsourced devs I've ever worked with - enthusiastically saying "yes I can do that" to everything and then delivering absolute garbage that takes me longer to fix/convince them to do right than it would have taken for me to just do it myself.

      • threatripper 9 hours ago

        Current models have no memory; they don't learn. You have to learn for them, for now. You have to put the learnings in the instructions and in code comments. If you don't describe WHAT your code SHOULD do and WHY you write it in THAT particular way, it will have no idea, and the code may just look like bad non-standard code waiting to be "improved".

        It works best if you keep close to mainstream styles and if you keep things easy and straightforward.
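
        A made-up illustration of the kind of WHAT/WHY comment I mean (the scenario is invented, not from a real codebase):

            import random
            import time

            # WHAT: retry with exponential backoff plus jitter.
            # WHY: the upstream API rate-limits in bursts, and a fixed delay caused
            # synchronized retry storms. Do NOT "simplify" this to a constant sleep.
            def call_with_backoff(fn, attempts=5):
                for i in range(attempts):
                    try:
                        return fn()
                    except ConnectionError:
                        if i == attempts - 1:
                            raise
                        time.sleep(2 ** i + random.random())  # jitter desynchronizes clients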

ants_everywhere a day ago

My writing style is pretty labor-intensive [0]. I go through a lot of drafts and read things out loud to make sure they work well, etc. And I tend to have a high standard for making sure I source things.

I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.

I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low-key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons, and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.

The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.

I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.

[0] Aside from internet comments of course, which are mostly stream of consciousness.

  • bgwalter a day ago

    Michelangelo worked alone on the David for more than two years:

    https://en.wikipedia.org/wiki/David_(Michelangelo)#Process

    Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).

    Even research many authors simply could not afford.

    • ants_everywhere a day ago

      Maybe Michelangelo was a bad choice, but I hope it's clear from my wording that I was using Michelangelo as an example and not saying anything specific about his use of assistants compared to his peers. And David is a masterpiece, not a minor work.

      I don't see where the article says he worked alone on David. It does seem that he used a miniature (bozzetto) and then scaled up with a pointing machine. One possibility is he made the miniature and had assistants rough out the upscaled copy before doing the fine work himself. Essentially, using the assistants to do the work you'd do on a band saw if you were carving out of wood.

      > I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).

      Restricting to non-commercial authors would narrow it down, since hiring assistants to write drafts probably only makes financial sense if the cost of the assistant is less than the cost of the time you would spend drafting.

      Alexandre Dumas is maybe a bit higher-brow than Stephen King:

      > He founded a production studio, staffed with writers who turned out hundreds of stories, all subject to his personal direction, editing, and additions. From 1839 to 1841, Dumas, with the assistance of several friends, compiled Celebrated Crimes, an eight-volume collection of essays on famous criminals and crimes from European history. https://en.wikipedia.org/wiki/Alexandre_Dumas

      But in general I agree: drafts are often the heart of the work, and that's where I'd expect masters to spend a lot of their time. Similarly with the statue miniatures.

    • netule a day ago

      James Patterson comes to mind. He simply writes detailed outlines for the plots of his novels and has other authors write them for him. The books are then published under his name, which is more like a brand at that point.

  • tombarys 10 hours ago

    Good point! Thanks.

    I like the perspective of "choices" during creation. It is an essential principle of real art that it is the result of thousands or millions of deliberate choices. This is what we admire in art. If you mostly use a machine (or other means that decide instead of you and for you) for creation, you as a creator simply make fewer choices.

    In this case, you delegate many of your experienced/crazy/hard decisions to the model (which is based on such decisions already made by other artists, but combines them in a random way). It is like decompressing a JPEG – some things are just hallucinated by the machine.

    From the perspective of pure human creativity, the result is thin and diluted, even if it seems deliberate. In my opinion, art lovers will seek out the dense art made by humans, maybe asking even more for some kind of "proof" of the human-based process. What do you think?

  • BolexNOLA a day ago

    At its most basic level, I just like throwing things I’ve written at ChatGPT and telling it to rewrite them in “x” voice or tone, maybe condensing them or expanding on some element, and I just pick whatever word comes to mind for the style. Half the time I don’t really use what it spits out. I am a much stronger editor than I am a writer, so when I see things written a different way it really helps me break through writer’s block or just the inertia of moving forward on something. I just treat it like a mediocre sounding board, and frankly it’s been great for that.

    When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha

mrbluecoat a day ago

I avoided cell phones too when they first came out. I didn't want the distraction or "digital leash". Now it's a stable fixture in my life. Some technology is simply transformational, and it's just a matter of time until almost everyone comes to accept it at some level. Time will tell if AI breaks through the hype curve, but my gut feeling is it will within 5 years.

  • GlacierFox a day ago

    My phone is a fixture in my life, but I actually spend a lot of effort trying to rid myself of it. The thing for me, currently, on the receiving end, is that I just don't read anything (apart from books) as if it has any semblance of authenticity anymore. My immediate assumption is that a large chunk of it, or sometimes the entire piece, has been written or substantially altered by AI. Seeing this transferring into the publishing and writing domain is simply depressing.

  • uludag a day ago

    I avoided web3/crypto/bitcoin altogether when they came out. I'm happy I did and I don't see myself diving into this world anytime soon. I've also never used VR/AR, never owned a headset, never even tried one. Again, I don't see this changing any time soon.

    Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.

    • scarier 20 hours ago

      I don't totally disagree with you, but I think it's important to note that just because a technology isn't value-adding to you doesn't mean it isn't fundamentally value-adding in general. VR has been game-changing in immersive simulation for me, for example.

  • cheschire a day ago

    smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.

    I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.

    As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey, did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30-minute commute and pick up your girlfriend! Want me to send her a notification that you want to do this?"

    People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.

    MCP will probably kill the web as we know it.

    • TheOtherHobbes a day ago

      That's not what will happen. The ad tech companies will pivot and start selling these services as neutral helpers, when in fact they'll use their knowledge of your schedule, preferences, and income to spend money on goods and services you don't really want.

      It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.

      And the richer you are, the more freedom you'll have to opt out and manage your own decisions.

    • sampl3username a day ago

      >smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.

      This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would use it.

      • wright-goes a day ago

        Access to banking is indeed critical, but when? And for 2FA, which accounts, and when? As bank apps become more invasive while also failing to offer substantive 2FA (e.g. forcing text messaging as a 2FA option falls outside my risk tolerance), I've segmented my devices' access.

        The ability to transfer funds is something I'm now fine doing via a dedicated device with a dedicated password manager account, and I'm fine uninstalling banks' apps from my phone and dis-enrolling cell phone numbers.

        Given the wanton collection and sale of my data by many entities I hadn't expected (naivety on my part), I've restricted access to critical services by device and or web browser only. It's had the added bonus of making me more purposeful in what I'm doing, albeit at the expense of a convenience. Ultimately, I'm not saying my approach is right for everyone, but for me it's felt great to take stock of historical behavior and act accordingly.

      • Findecanor a day ago

        I bought my first smartphone in 2020 after my old compact camera died, and I couldn't find a replacement to buy because they had been supplanted by smartphones.

    • coliveira a day ago

      If this happens, I have an excellent business strategy: human concierges who will help people with specific areas of their lives. Sell a premium service where paid humans will interact with all this noise so clients never have to talk to machines.

  • ApeWithCompiler a day ago

    True, but for me this is also true: smartphones are a stable fixture in my life, and by now I try to get rid of them as much as possible.

  • threatripper a day ago

    What AI currently lacks is mainly context. A well-trained, experienced human knows their reader very well and knows what they don't need to write. And for what they do write, they know the tone they need to hit. I fully expect that in the future this will turn around: the author will write the facts and framework with the help of AI, and your AI will extract and condense it for your consumption. Your AI knows everything about you. It knows everything you ever consumed. It knows how you think and what it needs to tell you, in which tone, to give you the best experience. You will be informed better than ever before. The future in AI will be bright!

  • timeon a day ago

    Analogies are not arguments.

mobeets a day ago

I’m with you: I think you did a good job of summarizing all the places where LLMs are super practical/useful, but agreed that for prose (as someone who considers themselves a proficient writer), it just never seems to contribute anything useful. And for those who are not proficient writers, I’m sure it can be helpful, but it certainly doesn’t contribute any new ideas if you’re not providing them.

  • jml78 a day ago

    I am not a writer. My oldest son, 16, started writing short stories. He did not use AI in any aspect of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing or unclear, or on holes in the plot.

    Not to take any words it gives, but to read what it says and decide if those things are true, and if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.

    • moregrist a day ago

      Have you looked for:

      - Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.

      - School (sometimes library) writing workshops. These help students develop bonds with their peers and benefit both sides: the ones giving feedback are learning to be better editors.

      Both of these offer a lot of value in terms of community building and also getting feedback from people vested in the craft of writing.

      • jml78 a day ago

        Good feedback; we live a somewhat unusual lifestyle. We are digital nomads who live on a sailboat. I think some of that is possible, and I will recommend he look for some online writing groups, but the places we generally sail to are countries where schools/libraries aren't going to have those types of things. It is challenge enough flying him back to the US to take AP exams.

      • ryeats a day ago

        The open question is whether someone who learns this way will actually develop taste and mastery. I think the answer is mixed: some will use it as a crutch, but it will also be able to give them a little insight beyond what they could learn by reading, and inquisitive minds will be able to grow discerning.

    • zB2sj38WHAjYnvm a day ago

      This is very sad.

      • endemic a day ago

        Why? It seems like a good idea; relying on the LLM to write for you won’t develop your skills, but using it as an editor is a good middle ground. Also, there’s no shame in saying an LLM is “better” than you at a task.

        • ryanblakeley a day ago

          Creative expression is also about relationships with other people and connecting with an audience. Treating it like product optimization seems hollow and lonely. There's friction to asking another person to read and give feedback on something you wrote, but it's the kind of friction that helps you grow.

        • sampl3username a day ago

          Art is fundamentally a human activity. No amount of artistic work can be delegated to a machine, or else the art is dehumanised.

          • strken a day ago

            This seems like it would ban drawing tablets, musical instruments, and a lot of other things which seem silly to ban.

            • GeoAtreides a day ago

              In this particular instance, the medium is not the message. Or the art.

      • zaphod420 a day ago

        It's not sad; it's using modern tools to learn. People who don't embrace the future get left behind.

        • DanHulton a day ago

          You say that as if it's a justification, not an observation.

          For one, the world doesn't need to be that way, i.e. we don't need to "leave behind" anyone who doesn't immediately adopt every single piece of new technology. That's simple callousness and doesn't need to be ruthlessly obeyed.

          And for two, it's provably false. What is "the future?" VR? The metaverse? Blockchain? NFTs? Hydrogen cells? Driverless cars? There has been exactly ZERO penalty for not embracing any of these, all sold to us by hucksters as "the future".

          We're going to have to keep using a classic piece of technology for a while now, the Mark 1 Human Brain, to properly evaluate new technology and what its place in our society is, and we oughtn't to be reliant on profound-seeming but overly simplistic quotes like that.

          Be a little more discerning, and think for yourself before you lose the ability to.

          • jml78 a day ago

            Dan,

            Do you have kids? Outside of discipline, and even there, I want to have a positive relationship with my sons.

            My oldest knows that I am not a writer; there are a ton of areas where I can give legit good advice. I can actually have a fun conversation about his stories, but I have no qualifications to tell him what he might want to change. I can say what I like, but my likes/dislikes are not what an editor provides. I actually stay away from dislikes on his writing, because who cares what I don't like.

            I would rather encourage him to write, write more, and get some level of feedback even if I don’t think my feedback is valuable.

            LLMs have been trained on likely all published books; it IS more qualified than me.

            If he continues to write and gets good enough, he should seek a human editor, sure.

            But I never want to be the reason he backs away from something because my feedback was wrong. It is easier for people to take critical feedback from a computer than from their parents. Kids want to please, and I don't want him writing stuff because he thinks it will be up my alley.

            • whoisyc a day ago

              There is something deeply disturbing about your attitude towards making mistakes.

              You think you shouldn’t give advice because your feedback is not valuable and may even cause your son to give up writing, but you have so far given no reason why AI wouldn’t do the same. From the entire ChatGPT “glazing” incident, I can also argue that the AI can give bad feedback. Heck, most mainstream models are fine-tuned to sound like a secretary that never says no.

              Sorry if this sounds rude, but it feels like the real reason you ask your son to get AI feedback is to avoid being personally responsible for mistakes. You are not using AI as a tool; you are using it as a scapegoat in case anything goes wrong.

            • skydhash a day ago

              > LLMs have been trained on likely all published books, it IS more qualified than me.

              It has also been trained on worthless comments on the internet, so that’s not a great indicator.

            • stefanka a day ago

              > But I never want to be the reason he backs away from something because my feedback was wrong.

              Do you want an LLM to be the reason? You can explain that your feedback is opinionated or biased. And you know him better than any machine ever will.

        • jml78 a day ago

          Exactly. I would rather read his stories and discuss them with him. My advice on anything outside of pure opinion is invalid.

          • IanCal a day ago

            Having something else help doesn’t preclude reading with them - it also may have better advice. Very rarely is anyone suggesting an all or nothing approach when talking about adding a tool.

    • SV_BubbleTime a day ago

      Large Language Model, not Large Fact Model.

tolerance a day ago

For things like coding, LLMs are useful, and DEVONThink's recent AI integrations allow me to use local models as something like an encyclopedia or thesaurus, or to summarize unfamiliar blocks of text. At best I use it like scratch paper.

I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.

I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.

Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.

  • tombarys 10 hours ago

    You are right. It plateaued and even degraded in some ways. Or did we just get more sensitive to its bullshitting?

jdietrich a day ago

As a professional writer, the author of this post is likely a better writer than 99.99% of the population. A quick skim of his blog suggests that he's comfortably more intelligent than 99% of people. I think it's totally unsurprising that he isn't fully satisfied with the output of LLMs; what is remarkable is that someone in that position still finds plenty of reasons to use them.

Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce".

  • nerevarthelame a day ago

    I'm worried that an increasing number of people are relying on LLMs for things as fundamental to daily life as expressing themselves verbally or thinking critically.

    Perhaps LLMs can move someone's results from the 25th percentile to the 50th for a single task. (Although there's probably a much more nuanced discussion to be had about that: people with poor writing skills can still have unique, valuable, and interesting perspectives that get destroyed in the median-ization of current LLM output.) But after a couple years of using LLMs regularly, I fear that whatever actual talent they have will atrophy below their starting point.

  • kaliszad 17 hours ago

    The author is a great guy and indeed quite smart and meticulous in areas he cares about deeply. He is a published author with a reasonably popular book considering the market size: https://www.melvil.cz/kniha-jak-sbalit-zenu-20/ he has edited probably more books than he would like to admit as well. It's not surprising he is able to write a good article.

    However, good writing is a skill you can get good at with enough practice. Read a lot, write a lot of garbage, consult more experienced writers, and eventually you will write readable articles. Do 10-100x more of that and you will be pretty great. The rest is some kind of skill and experience in many fields other than writing, which will inform how to write even better. Some of it is intelligence, luck, great mentors, and perhaps something we call talent. As with most things, you can get far just by putting in a lot of diligent work.

  • ThrowawayR2 21 hours ago

    > "Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce""

    That does, to my mind, explain all the vengeful "haw haw, you're all going to get left behind" comments from some LLM proponents. They actually do get benefit from LLMs, unlike the highest part of the scale, who are overrepresented on HN; without realizing what that implies, they think they can overtake the highest part of the scale by using them. Well, we'll see.

  • antegamisou a day ago

    Idk, LLM writing style somehow almost always ends up sounding like an insufferable smartass Redditor spiel. Maybe it's only appealing to the respective audience.

tombarys a day ago

I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.

  • esjeon a day ago

    I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.

    But I still don't like that the same model struggles w/ my projects...

    • tombarys 10 hours ago

      This is a topic for another article! We tried hard to use (test) translation tools in some real-life scenarios. The results seemed helpful at first, but then we spent a lot of time again to reach our standards. As a side effect, our translators and editors felt they were losing their own creativity and sensitivity in the process.

      We are a publisher that succeeded thanks to the highest-quality translations. Our readers appreciate that and ask for it. The Czech language is very rich, and these machines are not able to make the most of it. The non-fiction sphere also needs a lot of fact-checking, e.g. in local and field terminology. So even if we can imagine the translation process being technically shortened by machine translation, it would probably ruin our reputation in the long term.

      At least for now...

paradox460 13 hours ago

Same. I might use them for some things here and there, but not for writing. When I'm writing blog posts, people are coming to my articles to read what I've written, not what some glorified Markov chain spits out.

K0balt a day ago

AI is useful in closed-loop applications; often it can even do a decent job of closing the loop itself… but you need to understand that it is a fundamentally extractive, not creative, process. The body of human cultural knowledge is the underlying resource, and AI is the drill with which we pull out the parts we want.

Coding, robotics, navigation of constrained data spaces such as translation, tagging, indexing, logging, parsing, data transformations… those are all strong target candidates for transformer architecture automation.

Creative thought is not.

magic_hamster a day ago

There have been quite a few skeptical blog posts about LLMs recently. Some say they won't use them for coding, others for getting creative ideas, and others won't use them for editing and publishing. However, the silent issue all these posts have in common is that resistance is futile.

To be fair, I also don't like using Copilot when working on code. In many cases it turns into a weird experience where the agent generates the next line(s) and I basically become a discriminator, judging whether the thing really understands my problem and solution. To be honest, it's boring, even if it might eventually make me turn in code faster.

With that said, I cannot ignore that LLMs are happening, and this is the future. The models keep improving but more importantly, the ecosystem keeps improving with things like MCP and better defined context for LLM tools.

We might be looking at a somewhat grim prospect. But like it or not, this is the future. Adapt and survive.

  • tombarys 9 hours ago

    I understand. The question is what it means for someone to "survive".

    For me, survival means:

    - continuing to do my best at the language level, even if more people gradually become satisfied with less
    - believing that education, critical thinking, and evidence-based principles are at the core of humanity's progress, and that one day they will make a comeback
    - being OK with a smaller income, and not wishing to exchange that for creating bullshit

    Adaptation, for me, means:

    - generally: staying open-minded
    - understanding and somehow accepting that the prospect is a bit grim, without falling into extreme, doom-laden thinking
    - exploring new ways to augment human-oriented creativity (with or without these tools)

    What do you think?

metalrain a day ago

A pretty similar view to what others have expressed, in the vein of "LLMs can be good, just not at my [area of expertise]".

  • esjeon a day ago

    I'm pretty sure they were generally (if not completely) correct when they said that.

    Either the tech is advancing so quickly that many people can't keep up, or the cost of adapting simply outweighs the potential profit over their remaining careers, even when taking the new tech into account.

kelvinjps10 a day ago

What about grammar and spelling corrections?

  • shakna a day ago

    Not the author, but another author here and...

    Well, it has a problem with my use of the Oxford comma, for one. Because a huge amount of the corpus is American English, and mine ain't. So it fails on grammar repeatedly.

    And if you introduce any words of your own, it will sometimes randomly correct them to something else, and randomly correct other words to the made-up ones. And it can't always tell when it's made such a change. And sometimes it does that even if you're just mixing existing languages like French and English. So you can make it useless for spellcheck by touching more than one language.

    I do keep trying, despite the fact my stuff has been stolen and is in the training data, because of all the proselytising, but right now... No AI is useful for my writing. Not even just for grammar and spelling.

dvfjsdhgfv 5 hours ago

> in a programming environment, you can immediately verify the answer by evaluating the code (at least for code snippets).

Well, it's a trap. You see that a snippet is right, so you accept it. Next time you do it faster, and faster. And then you get one that seems right but isn't. If you're lucky, it will cause an error.
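
A toy example of the kind of snippet I mean (mine, not from the article): it runs, it looks right, and it is wrong in a way that never raises an error:

    # "Remove duplicates from a list": looks correct, runs without error.
    def dedupe(items):
        return list(set(items))

    print(dedupe(["b", "a", "b"]))  # order is lost; may print ['a', 'b']

    # The boring fix preserves order (dicts keep insertion order in Python 3.7+).
    def dedupe_ordered(items):
        return list(dict.fromkeys(items))

    print(dedupe_ordered(["b", "a", "b"]))  # ['b', 'a']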

romarioj2h a day ago

AI is a tool like any other: it can be used well or poorly. It's important to know its limits, and, being a tool, it must be studied for proper use.

johnnyfived a day ago

What's interesting about thinking of code as art is that there is rarely a variety of ways of implementing logic that are all optimal. So if you decide on the implementation and have an LLM code it, you likely won't need to make major changes, given the right guidelines (I just mean a single script, for the sake of comparison).

Writing is entirely different, and for some reason generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And end users of a product care 0, or next to 0, about AI code.

  • tombarys 9 hours ago

    > "Images can blend in the background, reading takes active processing so we're much more sensitive. And for the end user of a product, they care 0 or next to 0 about AI code."

    Very interesting point!

mrits a day ago

I think there are a lot of good reasons to be cognitively lazy. Now might not be the time to learn about how something works.