anotherpaulg 2 days ago

> 99% of the code in this PR [for llama.cpp] is written by DeekSeek-R1

It's definitely possible for AI to do a large fraction of your coding, and for it to contribute significantly to "improving itself". As an example, aider currently writes about 70% of the new code in each of its releases.

I automatically track and share this stat as a graph [0] with aider's release notes.

Before Sonnet, most releases were less than 20% AI generated code. With Sonnet, that jumped to >50%. For the last few months, about 70% of the new code in each release is written by aider. The record is 82%.

Folks often ask which models I use to code aider, so I automatically publish those stats too [1]. I've been shifting more and more of my coding from Sonnet to DeepSeek V3 in recent weeks. I've been experimenting with R1, but the recent API outages have made that difficult.

[0] https://aider.chat/HISTORY.html

[1] https://aider.chat/docs/faq.html#what-llms-do-you-use-to-bui...

  • joshstrange 2 days ago

    First off I want to thank you for Aider. I’ve had so much fun playing with it and using it for real work. It’s an amazing tool.

    How do you determine how much was written by you vs the LLM? I assume it consists of parsing the git log and getting LoC from that or similar?

    If the scripts are public could you point me at them? I’d love to run it on a recent project I did using aider.

    • anotherpaulg 2 days ago

      Glad to hear you’re finding aider useful!

      There’s a faq entry about how these stats are computed [0]. Basically using git blame, since aider is tightly integrated with git.

      The faq links to the script that computes the stats. It’s not designed to be used on any repo, but you (or aider) could adapt it.

      You’re not the first to ask for these stats about your own repo, so I may generalize it at some point.

      [0] https://aider.chat/docs/faq.html#how-are-the-aider-wrote-xx-...
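
      In case it's useful, here's a rough sketch of the git-blame counting idea (a simplified illustration, not the actual blame.py script linked above; the path and author substring are placeholders):

        # Simplified sketch of per-author line attribution via git blame.
        # Not aider's blame.py; adapt the author substring to your repo.
        import subprocess

        def author_line_share(path: str, needle: str = "aider") -> float:
            """Fraction of lines in `path` last touched by an author matching `needle`."""
            out = subprocess.run(
                ["git", "blame", "--line-porcelain", path],
                capture_output=True, text=True, check=True,
            ).stdout
            authors = [line[len("author "):] for line in out.splitlines()
                       if line.startswith("author ")]
            return sum(needle in a.lower() for a in authors) / len(authors) if authors else 0.0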

      • joshstrange 2 days ago

        Thank you so much for linking me to that! I think an `aider stats`-type command would be really cool (it could calculate stats based on activity since the first aider commit, or across the repo's all-time commits).

        • prometheon1 a day ago

          Slightly longer than `aider stats` but here you go:

            uv run --with=semver,PyYAML,tqdm https://raw.githubusercontent.com/Aider-AI/aider/refs/heads/main/scripts/blame.py
      • nyarlathotep_ 2 days ago

        does this mean lines/diffs otherwise untouched are considered written by Aider?

        If a small change is made by an end-user to adjust an Aider result, who gets "credit"?

        • anotherpaulg 2 days ago

          It works like normal git blame -- it literally uses git blame.

          Whoever changed a line last gets credit. Only the new or newly changed lines in each release are considered.

          So no, "lines/diffs otherwise untouched" are NOT considered written by aider. That wouldn't make sense?

  • yoyohello13 2 days ago

    Maybe this is answered, but I didn't see it. How does aider deal with secrets in a git repo? Like if I have passwords in a `.env`?

    Edit: I think I see. It only adds files you specify.

    • FeepingCreature 2 days ago

      Aider has a command to add files to the prompt. For files that are not added, it uses tree-sitter to extract a high-level summary. So for a `.env`, it will mention to the LLM the fact that the file exists, but not what is in it. If the model thinks it needs to see that file, it can request it, at which point you receive a prompt asking whether it's okay to make that file available.

      It's a very slick workflow.
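
      As a minimal illustration of that idea (not aider's actual repo-map code; this sketch assumes the third-party tree_sitter_languages helper package), you could share only the names of top-level definitions and never the file body:

        # Sketch: expose only top-level definition names from a file that
        # hasn't been added to the chat; the contents themselves stay hidden.
        from tree_sitter_languages import get_parser

        def file_outline(path: str) -> list[str]:
            source = open(path, "rb").read()
            tree = get_parser("python").parse(source)
            names = []
            for node in tree.root_node.children:
                if node.type in ("function_definition", "class_definition"):
                    ident = node.child_by_field_name("name")
                    if ident:
                        names.append(source[ident.start_byte:ident.end_byte].decode())
            return names  # e.g. ['load_settings', 'Coder']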

    • anotherpaulg 2 days ago

      You can use an .aiderignore file to ensure aider doesn't use certain files/dirs/etc. It conforms to the .gitignore spec.
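
      For example, a hypothetical .aiderignore that keeps secrets and build output away from the model might look like this (same syntax as .gitignore):

        .env
        secrets/
        *.pem
        dist/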

  • almostgotcaught 2 days ago

    > 99% of the code in this PR [for llama.cpp] is written by DeekSeek-R1

    you're assuming the PR will land:

    > Small thing to note here, for this q6_K_q8_K, it is very difficult to get the correct result. To make it works, I asked deepseek to invent a new approach without giving it prior examples. That's why the structure of this function is different from the rest.

    This certainly wouldn't fly in my org (even with test coverage/passes).

    • Jimmc414 2 days ago

      >> Small thing to note here, for this q6_K_q8_K, it is very difficult to get the correct result. To make it works, I asked deepseek to invent a new approach without giving it prior examples. That's why the structure of this function is different from the rest.

      > This certainly wouldn't fly in my org (even with test coverage/passes).

      To be fair, this seems expected. A distilled model might struggle more with aggressive quantization (like q6) since you're stacking two forms of quality loss: the distillation loss and the quantization loss. I think the answer would be to just use the higher cost full precision model.

    • Philpax 2 days ago

      llama.cpp optimises for hackability, not necessarily maintainability or cleanliness. You can look around the repository to get a feel for what I mean.

      • almostgotcaught 2 days ago

        I guess that means no one should use it for anything serious? Good to know.

        • Philpax 2 days ago

          To some extent, yes. I would not run production off of it, even if it can eke out performance gains on hardware at hand. I'd suggest vLLM or TGI or something similar instead.

  • brianstrimp a day ago

    > It's definitely possible for AI to do a large fraction of your coding, and for it to contribute significantly to "improving itself". As an example, aider currently writes about 70% of the new code in each of its releases.

    That number itself is not saying much.

    Let's say I have an academic article written in Word (yeah, I hear some fields do it like that). I get feedback, change 5 sentences, save the file. Then 20 kB of the new file differ from the old file. But the change I made was only 30 words, so maybe 200 bytes. Does that mean that Word wrote 99% of that update? Hardly.

    Or in C: I write a few functions in which my old-school IDE did the indentation and automatic insertion of closing curly braces. Would I say that the IDE wrote part of the code?

    Of course the AI-supplied code does more than my two examples, but claiming that some tool wrote 70% "of the code" suggests a linear utility of code, which just doesn't represent reality very well.

    • anotherpaulg a day ago

      Every metric has limitations, but git blame line counts seem pretty uncontroversial.

      Typical aider changes are not like autocompleting braces or reformatting code. You tell aider what to do in natural language, like a pair programmer. It then modifies one or more files to accomplish that task.

      Here's a recent small aider commit, for flavor.

        -# load these from aider/resources/model-settings.yml
        -# use the proper packaging way to locate that file
        -# ai!
        +import importlib.resources
        +
        +# Load model settings from package resource
        MODEL_SETTINGS = []
        +with importlib.resources.open_text("aider.resources", "model-settings.yml") as f:
        +    model_settings_list = yaml.safe_load(f)
        +    for model_settings_dict in model_settings_list:
        +        MODEL_SETTINGS.append(ModelSettings(**model_settings_dict))
        
      https://github.com/Aider-AI/aider/commit/5095a9e1c3f82303f0b...
      • brianstrimp a day ago

        The point is that not all lines are equal. The 30% that the tool didn't write is the hard stuff, and not just in line count. Once an approach, an architecture, or a design is clear, implementing it is merely manual labor. Progress is not linear.

        You shouldn't judge your software engineering employees by lines of code either. Those who think through the hard stuff often don't have that many lines of code checked in, but it's those people who are the key to your success.

    • stavros a day ago

      That's quite a reach, though, comparing an AI to a formatter. Presumably 70% of a new Aider release isn't formatting.

    • simonw a day ago

      "The stats are computed by doing something like git blame on the repo, and counting up who wrote all the new lines of code in each release. Only lines in source code files are counted, not documentation or prompt files."

  • reitzensteinm 2 days ago

    R1 is available on both together.ai and fireworks.ai; it should be a drop-in replacement using the OpenAI API.

    • SkyPuncher 2 days ago

      The problem is it's very expensive. More expensive than Claude.

      • 7thpower 2 days ago

        You can use the distilled version on Groq for free for the time being. Groq is amazing but frequently has capacity issues or other random bugs.

        Perhaps you could set up Groq as your primary and then fall back to Fireworks, etc., by using litellm or another proxy.

        • dzhiurgis 2 days ago

          Do you know any assistants for JetBrains that can plug into Groq + DeepSeek?

          • 7thpower 2 days ago

            I do not, as I'm not in the ecosystem, but Groq is OpenAI-compatible, so any tool that is OpenAI-compatible (99% are) and lets you put in your own base URL should work.

            For example, many tools will let you use local LLMs. Instead of putting in the URL of the local LLM, you would just plug in the Groq URL and key.

            see: https://console.groq.com/docs/openai
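
            As a rough sketch of what that looks like with the standard OpenAI client (the base URL is from Groq's docs; the model id is an assumption, so substitute whatever your provider lists):

              # Any OpenAI-compatible provider works by overriding base_url.
              from openai import OpenAI

              client = OpenAI(
                  base_url="https://api.groq.com/openai/v1",
                  api_key="YOUR_GROQ_KEY",
              )
              resp = client.chat.completions.create(
                  model="deepseek-r1-distill-llama-70b",  # placeholder model id
                  messages=[{"role": "user", "content": "Explain this stack trace..."}],
              )
              print(resp.choices[0].message.content)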

          • manmal 2 days ago

            Continue.dev is available for JetBrains, though the plugin is not as good as the VS Code counterpart. You can plug in any OpenAI-compatible API. Under experimental settings, you can also define an applyCode model (and others), which you could set to a faster, cheaper one (e.g. Sonnet).

  • maeil 2 days ago

    > I've been shifting more and more of my coding from Sonnet to DeepSeek V3 in recent weeks.

    For what purpose, considering Sonnet 3.5 still outperforms V3 on your own benchmarks (which also tracks with my personal experience comparing them)?

  • hammock 2 days ago

    That's amazing data. How representative do you think your Aider data is of all coding done?

  • carpo 2 days ago

    aider looks amazing - I'm going to give it a try soon. Just had a question on API costs to see if I can afford it. Your FAQ says you used about 850k tokens for Claude, and their API pricing says output tokens are $15/MTok. Does that mean it cost you under $15 for your Claude 3.5 usage, or am I totally off-base? (Sorry if this has an obvious answer ... I don't know much about LLM API pricing.)

    • simonw 2 days ago

      I built a calculator for that here: https://tools.simonwillison.net/llm-prices

      It says that for 850,000 Claude 3.5 output tokens the cost would be $12.75.

      But... it's not 100% clear to me if the Aider FAQ numbers are for input or output tokens.

      • anotherpaulg 2 days ago

        It's "total" tokens, input plus output. I'd guess more than two-thirds of them are input tokens.

        • simonw 2 days ago

          If we guess 500,000 for input and 350,000 for output that's a grand total of $6.75. This stuff is so cheap these days!
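
          The arithmetic behind that estimate, assuming Claude 3.5 Sonnet's published $3/M input and $15/M output pricing:

            input_cost = 500_000 / 1_000_000 * 3.00    # $1.50
            output_cost = 350_000 / 1_000_000 * 15.00  # $5.25
            total = input_cost + output_cost           # $6.75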

    • anotherpaulg 2 days ago

      When I was mostly just using Sonnet I was spending ~$100/month on their API. That included some amount of bulk API use for benchmarking, not just my interactive AI coding.

    • jsnell 2 days ago

      If you're concerned about API costs, the experimental Gemini models with API keys from AI Studio tend to have a very generous free quota. The quality of e.g. Flash 2.0 Experimental is definitely good enough to try out Aider and see if the workflow clicks. (For me, the quality has been good enough that I just stuck with it, and didn't get around to experimenting with any of the paid models yet.)

  • Imanari 2 days ago

    Love aider, thank you for your work! Out of curiosity, what are your future plans and ideas for aider in terms of features and workflow?

  • realo 2 days ago

    Hello...

    Is it possible to use aider with a local model running in LMStudio (or ollama)?

    From a quick glance I did not see an obvious way to do that...

    Hopefully I am totally wrong!

  • rahimnathwani 2 days ago

    When a log line contains {main_model, weak_model, editor_model}, does the existence of main_model mean the person was using Aider in Architect/Editor mode?

    Do you usually use that mode and, if so, with which architect?

    Thank you!

  • wvlia5 2 days ago

    Can you make a plot like HISTORY but with the axes changed? X: date, Y: work leverage (i.e. 50% = 2x, 90% = 10x, 95% = 20x; leverage = 1/(1 - pct))
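
    For reference, a sketch of that transform:

      def leverage(pct: float) -> float:
          """Map 'fraction of code written by aider' to an effort multiplier."""
          return 1 / (1 - pct)  # 0.5 -> 2x, 0.7 -> ~3.3x, 0.9 -> 10x, 0.95 -> 20x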

  • nprateem 2 days ago

    > As an example, aider currently writes about 70% of the new code in each of its releases.

    Yeah but part of that is because it's physically impossible to stop it making random edits for the sake of it.

simonw 2 days ago

Given these initial results, I'm now experimenting with running DeepSeek-R1-Distill-Qwen-32B for some coding tasks on my laptop via Ollama - their version of that needs about 20GB of RAM on my M2. https://www.ollama.com/library/deepseek-r1:32b

It's impressive!

I'm finding myself running it against a few hundred lines of code mainly to read its chain of thought - it's good for things like refactoring where it will think through everything that needs to be updated.

Even if the code it writes has mistakes, the thinking helps spot bits of the code I may have otherwise forgotten to look at.

  • lacedeconstruct 2 days ago

    The chain of thought is incredibly useful. I almost don't care about the answer now; I just follow what I find interesting in the way it broke the problem down. I tend to get tunnel vision when working on something for a long time, so it's a great way to revise my work and make sure I'm not misunderstanding anything.

    • rtsil 2 days ago

      Yesterday, I had it think for 194 seconds. At some point near the end, it said "This is getting frustrating!"

      • bronco21016 2 days ago

        I must not be hunting the right keywords, but I was trying to figure this out earlier. How do you set how much time it "thinks"? If you let it run too long, does the context window fill up so it's unable to do any more?

        • throwup238 2 days ago

          It looks like their API is OpenAI compatible but their docs say that they don’t support the `reasoning_effort` parameter yet.

          > max_tokens:The maximum length of the final response after the CoT output is completed, defaulting to 4K, with a maximum of 8K. Note that the CoT output can reach up to 32K tokens, and the parameter to control the CoT length (reasoning_effort) will be available soon. [1]

          [1] https://api-docs.deepseek.com/guides/reasoning_model
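
          Based on that guide, a minimal sketch of calling it through the standard OpenAI client looks roughly like this (model and field names are taken from the linked docs, so double-check them):

            from openai import OpenAI

            client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")
            resp = client.chat.completions.create(
                model="deepseek-reasoner",
                messages=[{"role": "user", "content": "Refactor this function ..."}],
                max_tokens=8000,  # caps the final answer only; the CoT can still run long
            )
            print(resp.choices[0].message.reasoning_content)  # chain of thought
            print(resp.choices[0].message.content)            # final answer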

  • miohtama 2 days ago

    Also even if the answer is incorrect, you can still cook the eggs on the laptop :)

    • lawlessone 2 days ago

      i spent a months salary on these eggs and can no longer afford to cook them :(

      • the_arun 2 days ago

        Hey, where are you getting the eggs? I am unable to find them in the market.

    • belter 2 days ago

      The eggs cost more than the laptop...

  • brandall10 2 days ago

    If you have a bit more memory, use the 6-bit quant; it takes up about 26 GB and has been shown to be only minimally lossy, as opposed to 4-bit.

    Also serve it as MLX from LMStudio, which will speed things up 30% or so, so your 6-bit will have similar perf to the 4-bit.

    Getting about 12-13 tok/sec on my M3 Max 48 GB.
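
    Back-of-the-envelope for those memory numbers, assuming ~32B parameters (the gap up to the quoted ~20-26 GB figures is KV cache, context, and runtime overhead):

      params = 32e9
      print(f"{params * 6 / 8 / 2**30:.1f} GiB")    # ~22.4 GiB of weights at 6 bits/param
      print(f"{params * 4.5 / 8 / 2**30:.1f} GiB")  # ~16.8 GiB at ~4.5 bits/param ("4-bit")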

    • thomasskis a day ago

      EXO is also great for running the 6bit deepseek, plus it’s super handy to serve from all your devices simultaneously. If your dev team all has M3 Max 48gb machines, sharing the compute lets you all run bigger models and your tools can point at your local API endpoint to keep configs simple.

      Our enterprise internal IT has a low friction way to request a Mac Studio (192GB) for our team and it’s a wonderful central EXO endpoint. (Life saver when we’re generally GPU poor)

  • mike31fr 2 days ago

    Noob question (I only learned how to use ollama a few days ago): what is the easiest way to run this DeepSeek-R1-Distill-Qwen-32B model that is not listed on ollama (or any other non-listed model) on my computer?

    • codingdave 2 days ago

      If you are specifically running it for coding, I'm satisfied with using it via continue.dev in VS Code. You can download a bunch of models with ollama, configure them into continue, and then there is a drop-down to switch models. I find myself swapping to smaller models for syntax reminders, and larger models for beefier questions.

      I only use it for chatting about the code - while this setup also lets the AI edit your code, I don't find the code good enough to risk it. I get more value from reading the thought process, evaluating it, and then cherry-picking which bits of its code I really want.

      In any case, if that sounds like the experience you want and you already run ollama, you would just need to install the continue.dev VS Code extension, and then go to its settings to configure which models you want in the drop-down.

    • simonw 2 days ago

      Search for a GGUF on Hugging Face and look for a "use this model" menu, then click the Ollama option and it should give you something to copy and paste that looks like this:

        ollama run hf.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF:IQ1_M
    • nyrikki 2 days ago

        ollama run deepseek-r1:32b

      They dropped the Qwen/Llama terms from the string

      https://ollama.com/library/deepseek-r1

    • marpstar 2 days ago

      I'm using it inside of LM Studio (https://lmstudio.ai), which has a "Discovery" tab where you can download models.

  • blakesterz 2 days ago

    Is DeepSeek really that big of a deal that everyone else should worry?

    • m11a 2 days ago

      A lot of what's nice about using DeepSeek-R1 for coding is that you can see the thought process, which (IME) has been more useful than the final answer.

      It may well be that o1's chain of thought reasoning trace is also quite good. But they hide it as a trade secret and supposedly ban users for trying to access it, so it's hard to know.

      • m11a 2 days ago

        One example from today: I had a coding bug which I asked R1 about. The final answer wasn't correct, but adapting an idea from the CoT trace helped me fix the bug. o1's answer was also incorrect.

        Interestingly though, R1 struggled in part because it needed the values of some parameters I didn't provide, and instead made incorrect assumptions about them. This was apparent in the CoT trace, but the model didn't mention it in its final answer. If I hadn't been able to see the trace, I wouldn't have known what was lacking in my prompt or how to make the model do better.

        I presume OpenAI kept their traces a secret to prevent their competitors from training models with it, but IMO they strategically erred in doing so. If o1's traces were public, I think the hype around DS-R1 would be relatively less (and more limited to the lower training costs and the MIT license, not so much its performance and usefulness).

        • gwd a day ago

          > I presume OpenAI kept their traces a secret to prevent their competitors from training models with it

          At some point there was a paper they'd written about it, and IIRC the logic presented was like this:

          - We (the OpenAI safety people) want to be able to have insight into what o1 is actually thinking, not a self-censored "people are watching me" version of its thinking.

          - o1 knows all kinds of potentially harmful information, like how to make bombs, how to cook meth, how to manipulate someone, etc, which could "cause harm" if seen by an end-user

          So the options as they saw it were:

          1. RLHF both the internal thinking and the final output. In this case the thought process would avoid saying things that might "cause harm", and so could be shown to the user. But they would have a less clear picture of what the LLM was "actually" thinking, and the potential state space of exploration would be limited due to the self-censorship.

          2. Only RLHF the final output. In this case, they can have a clearer picture of what the LLM is "actually" thinking (and the LLM could potentially explore the state space more fully without the risk of causing harm), but the thought process could internally mention things which they don't want the user to see.

          OpenAI went with #2. Not sure what DeepSeek has done -- whether they have RLHF'd the CoT as well, or just not worried as much about it.

        • manmal 2 days ago

          Do you use Continue.dev or similar tools to load code into the context, or do you copypaste into their web chat?

      • d3nj4l 2 days ago

        I have a lot of fun just posting a function into R1, saying "Improve this" and reading the chain of thought. Lots of insight in there that I would usually miss or glance over.

      • emporas 2 days ago

        A month back I tried o1 and Qwen's chain-of-thought model QwQ, asking them to explain some chemical reactions. QwQ got it correct, and o1 got it wrong.

        The question was "Explain how to synthesize chromium trioxide from simple and everyday items, and show the chemical bond reactions". o1 didn't balance the molecules on the left-hand and right-hand sides of the reaction, but it was very knowledgeable.

        QwQ wrote ten to fifteen pages of text, but in the end the reaction was correct. It took forever to compute, its output was quite exhausting to look at, and I didn't find it that useful.

        Anyway, in the end, there is no way to create chromium trioxide using everyday items. I thought maybe I could mix some toothpaste and soap and get it.

      • fibers 2 days ago

        How many reported cases of banning are there? That sounds insane, just for asking it to print out its chain of thought.

      • satvikpendem 2 days ago

        This is generally how I use LLMs anyway, as brainstorming tools, rather than writing code.

    • horsawlarway 2 days ago

      I would say worry? Yes. Panic? No.

      It's... good. Even the qwen/llama distills are good. I've been running the Llama-70b-distill and it's good enough that it mostly replaces my chatgpt plus plan (not pro - plus).

      If anything, one of my big takeaways is that OpenAI shot themselves in the foot, big time, by not exposing the CoT for the o1 Pro models. I find the <think></think> section of the DeepSeek models is often more helpful than the actual answer.

      For work that treats the AI as a collaborator rather than an "employee replacement", the CoT output is really valuable. It was a bad move for them to hide it completely from users, especially because they make the user sit there waiting while it generates anyway.

    • pavitheran 2 days ago

      DeepSeek is a big deal, but we should be happy, not worried, that our tools are improving.

      • bbzealot 2 days ago

        Why though?

        I'm worried these technologies may take my job away and make the balance between capital and labor even more uneven.

        Why should I be happy?

        • bubbleRefuge 2 days ago

          The marginal cost of developing complex software goes down, thereby making it affordable to a greater market. There will still be a need for skilled software engineers to understand domains, limitations of AI, and how to harness and curate AI to develop custom apps. Maybe software engineering for the masses. Local small businesses may now be able to afford custom software projects that were previously unthinkable.

          • Vampiero a day ago

            In what AI-powered world do you think that local small software businesses will survive?

            • simonw a day ago

              One where other businesses need help figuring out how to use AI for their own businesses.

              It doesn't matter how "easy" technology gets to use, there will always be a market for helping other people figure out how best to apply it.

          • the_af 2 days ago

            > There will still be a need for skilled software engineers to understand domains, limitations of AI, and how to harness and curate AI to develop custom apps.

            But will there be a need for fewer engineers, though? That's the question. And the competition for those who remain employed would be fierce, way worse than today.

            Or so I fear. I hope I'm wrong.

            • andrei_says_ 2 days ago

              I think it might be useful to look at this as multiple forces to play.

              One force is a multiplier of a software engineer’s productivity.

              Another force is the pressure of the expectation of constant, unlimited increases in profits. This pressure forces CEOs and managers to look for cheaper alternatives to expensive software engineers, and ultimately to eliminate the position and its expense. The lie that this is possible draws huge investments.

              And another force is the infinite number of applications for software, especially well-designed, truly useful software.

              • the_af 2 days ago

                Yes, these are good considerations.

                I'd be a hypocrite if I didn't admit I use AI daily in my job, and it's indeed a multiplier of my productivity. The tech is really cool and getting better.

                I also understand AI is one step closer to letting the everyday Jane or Joe Doe do cool and useful stuff that was out of reach before.

                What worries me are the capitalist, business-side forces at play, and what they will mean for my job security. Is it selfish? You bet! But if I don't advocate for myself, who will?

            • btilly 2 days ago

              Jevons' paradox says that you're probably wrong. But I'm worried about the same thing. The moat around human superiority is shrinking fast. And when it's gone, we may get more software, but will we need humans involved?

              • svara 2 days ago

                AI doesn't have needs or desires; humans do. And no matter how hyped one might be about AI, we're far away from creating an artificial human. As long as that's true, AI is a tool to make humans more effective.

                • btilly 2 days ago

                  AI may not have desires, but corporations do. And they control more resources than humans do.

                  Making corporations more effective is not always in the interest of humans.

                  • svara a day ago

                    That's fair, but the question was whether AI would destroy or create jobs.

                    You might speculate about a one-person megacorp where everything is done by AIs that a single person runs.

                    What I'm saying is that we're very far from this, because the AI is not a human that can make the CEO's needs and desires their own and execute on them independently.

                    Humans are good at being humans because they've learned to play a complex game, which is to pursue one's needs and desires in a partially adversarial social environment.

                    This is not at all what AI today is being trained for.

                    Maybe a different way to look at it, as a sort of intuition pump: if you were that one-man company, and you had an AGI that would correctly answer any unambiguously stated question you could ask, at what point would you need to start hiring?

                    • menaerus a day ago

                      You're taking the argument to an extreme; I don't think anyone is talking about replacing all engineers with a single AI computer doing the work for a one-person mega-corporation.

                      The actual question, which is much more realistic, is whether an average company of, let's say, 50 engineers will still need to hire those 50 engineers if AI turns out to be such an efficiency multiplier.

                      In that case, you will no longer need 10 people to complete 10 tasks in a given time unit, but perhaps only 1 engineer plus AI compute to do the same. Not all businesses can keep scaling forever, so it's to be expected that those 9 engineers will become redundant.

                      • svara a day ago

                        You took me too literally there, that was intended as a thought experiment to explore the limits.

                        What I was getting at was the question: If we feel intuitively that this extreme isn't realistic, what exactly do we think is missing?

                        My argument is, what's missing is the human ability to play the game of being human, pursuing goals in an adversarial social context.

                        To your point more specifically: Yes, that 10-person team might be replaceable by a single person.

                        More likely than not however, the size of the team was not constrained by lack of ideas or ambition, but by capital and organizational effectiveness.

                        This is how it's played out with every single technology so far that has increased human productivity. They increase demand for labor.

                        Put another way: Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them. The kind of team required for the digital transformation of every old fashioned industry.

                        • menaerus a day ago

                          In my 10-person team example, what, in your opinion, would the company do with the remaining 9 people once the AI proves its value in that team?

                          Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just be shifted to some other topic?

                          Let's say I am a business owner with a popular product, a backlog of 1000 bugs, and a team of 10 engineers. The engineers are busy juggling features and bug fixes at the same time. Now let's assume we have an AI model that relieves 9 out of 10 engineers from clearing the bug backlog, and we need only 1 or 2 engineers to review the code the AI model spits out for us.

                          What concrete type of work is left at this moment for the remaining 9 engineers?

                          Assuming that the team, as you say, is not constrained by a lack of ideas or ambition, and the feature backlog is somewhat indefinite in that regard, I think the real question is whether there's a market for those ideas. If there's no market for those ideas, then there's no business value ($$$) created by those engineers.

                          In that case, they become a plain cost, so what is the business incentive to keep them?

                          > Businesses in every industry will be able to hire software engineering teams that are so good that in the past, only the big names were able to afford them

                          Not sure I follow this example. Companies will still hire engineers, but IMO at much lower capacity than was required up until now. Your N SQL experts are now replaced by the model. Your M Python developers are now replaced by the model. Your engineer doing PR review is now replaced by the model. Heck, even your SIMD expert now seems to be replaced by the model too (https://github.com/ggerganov/llama.cpp/pull/11453/files). Those companies will no longer need M + N + ... engineers to create the business value.

                          • svara a day ago

                            > Your hypothesis, AFAIU, is that the company will just continue to scale because there's an indefinite amount of work/ideas to be explored/done, so the focus of those 9 people will just be shifted to some other topic?

                            Yes, that's what I'm saying, except that this would hold over an economy as a whole rather than within every single business.

                            Some teams may shrink. Across industry as a whole, that is unlikely to happen.

                            The reason I'm confident about this is that this exact discussion has happened many times before in many different industries, but the demand for labor across the economy as a whole has only grown. (1)

                            "This time it's different" because the productivity tech in question is AI? That gets us back to my original point about people confusing AI with an artificial human. We don't have artificial humans, we have tools to make real humans more effective.

                            (1) The point seems related to this https://en.wikipedia.org/wiki/Lump_of_labour_fallacy

                            • menaerus a day ago

                              Hypothetically you could be right. I don't know if "this time will be different", nor am I trying to predict what will happen on a global economic scale; that's out of my reach.

                              My question is of a much narrower scope, much more concrete and tangible - and yet I haven't been able to find a good answer for it, or strong counter-arguments if you will. If I had to guess, my prediction would be that many engineers will need to readjust their skills or even requalify for some other type of work.

                            • btilly 21 hours ago

                              Automation improved life in the Industrial Revolution because it displaced people from spinning and weaving into higher value add professions.

                              What higher value add professions will humans be displaced into by AI?

                • strogonoff a day ago

                  It should be obvious that technology exists for the sake of humans, not the other way around, but I have already seen an argument for firing humans in favour of LLMs since the latter emit less pollution.

                  LLMs do not have desires, but their existence alters desires of humans, including the ones in charge of businesses.

                • boothby 2 days ago

                  > AI doesn't have needs or desires; humans do.

                  I fear that this won't age well. But to shamelessly riff on Marx, those who control the means of computation will control society.

                  • svara a day ago

                    I agree the latter part is a risk to consider, but I really think getting an AI to replace human jobs on a vast scale will take much more than just training a bit more.

                    You need to train on a fundamentally different task, which is to be good at the adversarial game of pursuing one's needs and desires in a social environment.

                    And that doesn't yet take into account that the interface to our lives is largely physical, we need bodies.

                    I'm seeing us on track to AGI in the sense of building a universal question answering machine, a system that will be able to answer any unambiguously stated question if given enough time and energy.

                    Stating questions unambiguously gets pretty difficult fast even where it's possible, often it isn't even possible, and getting those answers is just a small part of being a successful human.

                    PS: Needs and desires are totally orthogonal to AI/AGI. Every animal has them, but many animals don't have high intelligence. Needs and desires are a consequence of our evolutionary history, not our intelligence. AGI does not need to mean an artificial human. Whether to pursue or not pursue that research program is up to us, it's not inevitable.

                    • boothby a day ago

                      Honestly, I wasn't even talking about jobs with that. I worry about an intelligent IoT controlled by authoritarian governments or corporate interests. Our phones have already turned society into a panopticon, and that can get much worse when AGI lands.

                      But yes, the job thing is concerning as well. AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today. It seems that we're heading inexorably towards dystopia.

                      • svara a day ago

                        > AI won't scrub a toilet, but it will cheaply and inexhaustibly do every job that humans find meaningful today

                        That's the part I really don't believe. I'm open to being wrong about this, the risk is probably large enough to warrant considering it even if the probability of this happening is low, but I do think it's quite low.

                        We don't actually have to build artificial humans. It's very difficult and very far away. It's a research program that is related to but not identical to the research program leading to tools that have intelligence as a feature.

                        We should be, and in fact we are, building tools. I'm convinced that the mental model many people here and elsewhere are applying is essentially "AGI = artificial human", simply because the human is the only kind of thing in the world that we know that appears to have general intelligence.

                        But that mental model is flawed. We'll be putting intelligence in all sorts of places that are not similar to a human at all, without those devices competing with us at being human.

                        • boothby a day ago

                          To be clear, I'm much more concerned about the rise of techo-authoritarianism than employment.

                          And further ahead, where I said your original take might not age well; I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.

                          And nobody needs to set out to build that. We just need to build tools. And then, one day, an AGI writes a virus and hacks the all-too-networked and all-too-insecure planet.

                          • the_af 6 hours ago

                            > I'm also not worried about AI making humanoid bodies. I'd be worried about a future where mines, factories, and logistics are fully automated: an AI for whom we've constructed a body which is effectively the entire planet.

                            I know scifi is not authoritative, and no more than human fears made into fiction, but have you read Philip K. Dick's short story "Autofac"?

                            It's exactly what you describe. The AI he describes isn't evil, nor does it seek our extinction. It actually wants our well-being! It's just that it's taken over all of the planet's resources and insists on producing and making everything for us, so that humans have nothing left to do. And they cannot break the cycle, because the AI is programmed to only transition power back to humans "when they can replicate Autofac output", which of course they cannot, because all the raw resources are hoarded by the AI, which is vastly more efficient!

                            • boothby 2 hours ago

                              I think that science fiction plays an important role in discourse. Science fiction authors dedicate years deeply contemplating potential future consequences of technology, and packaging such into compelling stories. This gives us a shorthand for talking about positive outcomes we want to see, and negative outcomes that we want to avoid. People who argue against scifi with a dismissal that "it's just fiction" aren't participating in good faith.

                              On the other hand, it's important not to pay too close attention to the details of scifi. I find myself writing a novel, and I'm definitely making decisions in support of a narrative arc. Having written the comment above... that planetary factory may very well become the third faction I need for a proper space opera. I'll have to avoid that PKD story for the moment, I don't want the influence.

                              Though to be clear, in this case, that potentiality arose from an examination of technological progress already underway. For example, I'd be very surprised if people aren't already training LLMs on troves of viruses, metasploit, etc. today.

                    • the_af a day ago

                      To be clear, I'm not arguing humans will stop being involved in software engineering completely. What I fear is that the pool of employable humans (as code reviewers, prompt engineers and high-level "solution architects") will shrink, because fewer will be needed, and that this will cause ripples in our industry and affect employment.

                      We know this isn't far-fetched. We have strong evidence to suspect that, during the big layoffs of a couple of years ago, FAANG and startups colluded to lower engineer salaries across the board, and that their excuse ("the economy is shrinking") was flimsy at best. Now AI presents them with another powerful tool to reduce salaries even more, with a side dish of reducing the size of the cost center that is programmers and engineers.

                  • visarga 2 days ago

                    In the AI age, those who own the problems stand to own the AI benefits. Utility is in the application layer, not the hosting or development of AI models.

              • kelipso a day ago

                Can’t be sure though, there used to be way more accountants decades ago.

              • bubbleRefuge 2 days ago

                This is a better world. We can work a few hours a week and play tennis, golf, and argue politics with our friends and family over some good cheese and wine while the bots do the deployments.

                • hgomersall a day ago

                  We're already there in terms of productivity. The problem is the inordinate number of people doing nothing useful yet extracting huge amounts. Think most of finance for example.

                  • bubbleRefuge 41 minutes ago

                    Oh, yeah. Finance is too big. They've captured the government.

                • the_af 2 days ago

                  Assuming you retain a good paying job and are not treated like a disposable commodity. That cheese and wine is not going to be free.

                  • bubbleRefuge 2 days ago

                    As long as we keep learning and keep our heads in the game, we will be fine. I worry much more for the non-tech-savvy, like scrum masters. Yikes.

            • weatherlite a day ago

              If it's any consolation: if the extra productivity indeed happens and kills off SWE jobs, I don't see why this dynamic shouldn't play out in almost every white-collar job across the private sector (government sectors are pretty much protected no matter what happens). There'll be decreasing demand for lawyers, accountants, analysts, secretaries, HR personnel, designers, marketers, etc. Even doctors might start feeling this eventually.

            • bubbleRefuge 2 days ago

              No, I think more engineers, especially those who can be a jack-of-all-trades. If a software project that normally takes 1 year of custom development can be done in 2 months, then that project is affordable to a wide array of businesses that could never fund that kind of project before.

              • the_af 2 days ago

                I can see more projects being deployed by smaller businesses, that would otherwise not be able to.

                But how will this translate to engineering jobs? Maybe there will be AI tools to automate most of the stuff a small business needs done. "Ah," you may say, "I will build those tools!". Ok. Maybe. How many engineers do you need for that? Will the current engineering job market shrink or expand, and how many non-trash, well paid jobs will there be?

                I'm not saying I know for sure how it'll go, but I'm concerned.

                • dwaltrip 2 days ago

                  Just had a thought, perhaps software engineers will become more like car mechanics.

                  • the_af a day ago

                    That's not an encouraging thought.

                    By the way, car mechanics (especially independent ones, your average garage mechanic) understand less and less about what's going on inside modern cars. I don't want this to happen to us.

                • bubbleRefuge 2 days ago

                  It would be similar to solutions engineers today: you build solutions using AI. Think about all the moving parts of building a complex business app: user experience, data storage, business logic, reporting, etc. The engineer can orchestrate the AI to build the solution and validate its correctness.

                  • the_af a day ago

                    I fear even this role will need way fewer people, meaning the employment pool will heavily shrink, and those competing for a job will need to accept lower paychecks.

                    • bubbleRefuge a day ago

                      Like someone said above, demand is infinite. Imagine a world where the local AI/engineer tech is as ubiquitous as the Uber driver. I don't think it will necessarily create smaller paychecks; hard to say. But I see demand skyrocketing for customized software that can be provided at 1/10 of today's costs.

                      We are far away from that though. As an enterprise software/data engineer, AI has been great at answering questions and generating tactical code for me. Hours have turned into minutes. It even motivated me to work on side projects because they take less time. You will be fine. Embrace the change. It's good for you. It will lead to personal growth.

                      • the_af a day ago

                        I'm not at all convinced demand is infinite, nor that this demand will result in employment. This feels like begging the question. This is precisely what I fear won't happen!

                        Also, I don't want to be a glorified uber driver. It's not good for me and not good for the profession.

                        > As an enterprise software/data engineer, AI has been great in answering questions and generating tactical code for me. Hours have turned into minutes.

                        I don't dispute this part, and it's been this way for me too. I'm talking about the future of our profession, and our job security.

                        > You will be fine. Embrace the change. It's good for you. It will lead to personal growth.

                        We're talking at cross-purposes here. I'm concerned about job security, not personal growth. This isn't about change. I've been almost three decades in this profession, I've seen change. I'm worried about this particular thing.

                        • bubbleRefuge 36 minutes ago

                          3 decades, me too, since '97. Maybe the Uber driver was a bad example. What about a work model similar to a lawyer's, whereby one can specialize in creating certain types of business or personal apps at a high hourly rate?

            • llm_trw 2 days ago

              Did the introduction of assemblers lead to creating more or fewer programming jobs?

              • the_af a day ago

                I get this argument, but it feels we cannot always reason by analogy. Some jumps are qualitatively different. We cannot always claim "this didn't happen before, therefore it won't happen now".

                Of course assemblers didn't create fewer programming jobs, nor did compilers or high level languages. However, with "NO CODE" solutions (remember that fad?) there was an attempt at reducing the need for programmers (though not completely taking them out of the equation)... it's just that NO CODE wasn't good enough. What if AI is good enough?

        • joshmarlow 2 days ago

          > make the balance between capital and labor even more uneven.

          I think it's interesting to note that as open source models evolve and proliferate, the capital required for a lot of ventures goes down - which levels the playing field.

          When I can talk to one agent-with-a-CAD-integration and have it design a gadget for me and ship the design off to a 3D printer and then have another agent write the code to run on the gadget, I'll be able to build entire ventures that would require VC funding and a team now.

          When intellectual capital is democratized, financial capital loses just a bit of power...

          • breuleux 2 days ago

            What value do you bring to the venture, though? What makes your venture more likely to succeed than anybody else's, if the barrier is that low? I mean, I'll tell you: if anyone can spend $100 to design the same new gadget, the winner is going to be whoever can spend a million in production (to get economy of scale) and marketing. Currently, financial capital needs your brain, so you can leverage that. But if they can use a brain in the cloud instead, they're going to do just that. Sure, you can use it and design anything you can imagine, but nobody is going to pay you for it unless you, yourself, bring some irreplaceable value to the table.

            • visarga 2 days ago

              Since everyone has AI, it stands to reason that humans still make the difference. That is why I don't think companies will be able to automate software dev too much; they would be cutting the one advantage they could have over their competition.

              • breuleux a day ago

                Humans will make the difference only if they can do things that the AI cannot. The more capable the AI gets, however, the fewer humans will meet that threshold, and they are the ones who will lose out. Capital, on the other hand, will always make a difference.

          • toth 2 days ago

            I can't understand how you reach your conclusion.

            At present, if you have financial capital and need intellectual capital you need to find people willing to work for you and pay them a lot of money. With enough progress in AI you can get the intellectual capital from machines instead, for a lot less. What loses value is human intellectual capital. Financial capital just gained a lot of power, it can now substitute for intellectual capital.

            Sure, you could pretend this means you'll be able to launch a startup without any employees, and so will everyone. But why wouldn't Sam Altman or whomever just start AI Ycombinator with hundreds of thousands of AI "founders"? Do you really think it would be more "democratic"?

            • visarga 2 days ago

              > But why wouldn't Sam Altman or whomever just start AI Ycombinator with hundreds of thousands of AI "founders"? Do you really think it would be more "democratic"?

              AI is useful in the same way Linux is:

              - can run locally

              - empowers everyone

              - need to bring your own problem

              - need to do some of the work yourself

              The moral is that you need to bring your own problem to benefit. The model by itself does not generate much benefit. This means AI benefits are distributed like open source ones.

              • toth a day ago

                Those points are true of current AI models, but how sure are you they will remain true as technology evolves?

                Maybe you believe that they will always stay true, that there's some ineffable human quality that will never be captured by AI and value creation will always be bottle-necked by humans. That would be nice.

                But even if you still need humans in the loop, it's not clear how "democratizing" this would be. It might sound great if in a few years you and everyone else can run an AI on your laptop that is as good as a great technical co-founder that never sleeps. But note that means someone who owns a data center can run the equivalent of the current entire technical staff of Google, Meta, and OpenAI combined. Doesn't sound like a very level playing field.

        • ghxst 2 days ago

          > I'm worried these technologies may take my job away

          The way I look at this is that with the release of something like DeepSeek, the possibility of running a model offline and locally to work _for_ you while you are sleeping, doing groceries, or spending time with your kids/family is coming closer to reality.

          If AI is able to replace me one day I'll be taking advantage of that way more efficiently than any of my employee(s).

          • esafak 2 days ago

            Why wouldn't your employer just hire fewer people to do it since you seem to have enough spare time to do lots of things besides work?

          • ghxst 2 days ago

            Meant to say employer(s).

        • jonas21 2 days ago

          Do you feel the same way about open source software?

        • llm_trw 2 days ago

          The only people who think that AI models won't result in more demand for human labour are the ones who have never used them.

        • CamperBob2 2 days ago

          You won't be happy doing a robot's job either, at least not for long.

          In the ideal case, we won't be dependent on the unwilling labor of other humans at all. Would you do your current job for free? If not -- if you'd rather do something else with your productive life -- then it seems irrational to defend the status quo.

          One thing's for certain: ancient Marxist tropes about labor and capital don't bring any value to the table. Abandon that thinking sooner rather than later; it won't help you navigate what's coming.

          • cryptopian 2 days ago

            That's not historically what's happened though, is it? We've had plenty of opportunities to reduce the human workload through increased efficiency. What usually happens is people demand more - faster deliveries, more content churn; and those of us who are quite happy with what we have are either forced to adapt or get left behind while still working the same hours.

            • satvikpendem 2 days ago

              Jevons' paradox really does apply to everything, not just the GPU demand people have been using it for this last week. People always demand more, and thus there is an endless amount of work to be done.

            • esafak 2 days ago

              If you really have enough, you can retire early.

              • Capricorn2481 a day ago

                We don't have enough because the productivity improvements are not shared with the working class. The wealth gap increases, people work the same. This is historically what has happened and it's what will happen with AI. The next generations will never have the opportunity to retire.

          • hooverd 2 days ago

            Because billionaires think that you are a horse and that the best course of action is to turn you into glue while they hope AGI lets them live forever.

            • CamperBob2 2 days ago

              Billionaires don't think about you at all. That's what nobody seems to get.

              We enjoy many luxuries unavailable even to billionaires only a few decades ago. For this trend to continue, the same thing needs to happen in other sectors that happened in (for example) the agricultural sector over the course of the 20th century: replacement of human workers by mass automation and superior organization.

              • breuleux 2 days ago

                In the past, human workers were displaced. The value of their labour for certain tasks became lower than what automation could achieve, but they could still find other things to do to earn a living. What people are worrying about here is what happens when the value of human labour drops to zero, full stop. If AI becomes better than us at everything, then we will do nothing, we will earn nothing, and we will have nothing that isn't gifted to us. We will have no bargaining power, so we just have to hope the rich and powerful will like us enough to share.

                • nuancebydefault 2 days ago

                  What we see through history is that human labour cost goes up and machine cost goes down.

                  Suppose you want to have your car washed. Hiring someone to do that will most likely give the best result: less physical resources used (soap, water, wear of cloth), less wear and tear on the car surface and less pollution and optionally a better result.

                  Still, the benefit/cost equation is clearly in favor of the machine when you do the math, even though it uses more resources in the process.

                  What is lacking in our capitalist economic system is that hiring people to perform services is punished by much higher taxes compared to using a machine, which is often even tax deductible. That way, the machine brings benefits mostly to its user (often a wealthier person) and much less to society as a whole. If only someone could find a solution to this tragedy.

                  • Vampiero a day ago

                    > If only someone could find a solution to this tragedy.

                    Well, someone earlier in the thread said to abandon Marxist thought because it's obsolete. So I don't know how to help you!

                    • nuancebydefault a day ago

                      I prefer not to use -ists and -isms. I read that Marx wrote he was not a Marxist. His studies and writings certainly got used as a frame of reference for a rather wide set of ideologies. Maybe someone with a deeper background on the topic can chime in with ideas?

                  • CamperBob2 a day ago

                    > If only someone could find a solution to this tragedy.

                    We did. Save up a few bucks, nothing out of reach, and (as you suggested yourself!) you can afford to buy your own machine. Here you go: https://xcancel.com/carrigmat/status/1884244369907278106

                    You'd have received no such largesse from the Marxists. You're welcome.

                    • Capricorn2481 a day ago

                      Forgetting the offhand implication that $6,000 is not out of reach for anyone, this will do nothing. If we're really taking this to its natural conclusion, that AI will be capable of doing most jobs, companies won't care that you have an AI. They will not assign you work that can be done with AI. They have their own AI. You will not compete with any of them, and even if you find a novel way to use it that gives you the gift of income, that won't be possible for even a small fraction of the population to replicate.

                      You can keep shoehorning lazy political slurs into everything you post, but the reality is going to hit the working class, not privileged programmers casually dumping 6 grand so they can build their CRUD app faster.

                      But you're essentially arguing for Marxism in every other post on this thread, whether you realize it or not.

                      • CamperBob2 17 hours ago

                        Yeah, there's always some reason why you can't do something, I guess... or why The Man is always keeping you down, even after putting capabilities into your hands that were previously the exclusive province of mythology.

                        Perhaps other sites beckon.

                • CamperBob2 2 days ago

                  If anything like that had actually happened in the past, you might have a point. When it comes to what happens when the value of human labor drops to zero, my guess is every bit as good as yours.

                  I say it will be a Good Thing. "Work" is what you call whatever you're doing when you'd rather be doing something else.

                  • breuleux 2 days ago

                    The value of our labour is what enables us to acquire things and property, with which we can live and do stuff. If your labour is valueless because robots can do anything you can do better, how do you get any of the possessions you require in order to do that something else you'd rather be doing? Capitalism won't just give them to you. If you do not own land, physical resources or robots, and you can't work, how do you get food? Charity? I'd argue there will need to be a pretty comprehensive redistribution scheme for the people at large to benefit.

              • p_j_w 2 days ago

                >Billionaires don't think about you at all.

                If that were true they wouldn't be building ultra secure bunkers to escape to when the climate shit hits the fan.

                • inglor_cz 2 days ago

                  How many of them did that? Five out of a thousand?

                  Anecdotally, around two people in a hundred in my proximity are preppers as well, though obviously with smaller budgets.

                  It is just a specific fringe way of thinking.

      • hnthrow90348765 a day ago

        This added momentum to two things: reducing AI costs and increasing quality.

        I don't know when the threshold of "replace the bottom X% of developers because AI is so good" happens for businesses based on those things, but it's definitely getting closer instead of stalling out like the bubble predictors claimed. It's not a bubble if the industry is making progress like this.

      • weatherlite a day ago

        I think it's a mixed bag but if people want to be happy I'm not going to spoil the party!

    • flmontpetit 2 days ago

      As far as realizing the prophecy of AI as told by its proponents and investors goes, probably not. LLMs still have not magically transcended their obvious limitations.

      However, this has huge implications for the feasibility and spread of the technology, and further implications for the economy and geopolitics now that confidence in the American AI sector has been hit and people and organizations internationally have somewhere else to look.

      edit: That being said, this is the first time I've seen an LLM do a better job than even a senior expert could do, and even if it's on a small scope/in a limited context, it's becoming clear that developers are going to have to adopt this tech in order to stay competitive.

    • buyucu 2 days ago

      There are two things. First, deepseek v3 and r1 are both amazing models.

      Second, the fact that deepseek was able to pull this off with such modest resources is an indication that there is no moat, and you might wake up tomorrow and find an even better model from a company you have never heard of.

      • girvo 2 days ago

        They pulled this off with such modest resources, including using ChatGPT itself for the RL inputs. It was quite smart, and it doesn't contradict your point that there is no moat per se, but without those frontier models and their outputs there is no V3 and there is no R1.

        • buyucu a day ago

          this is how science works.

    • simonw 2 days ago

      Yeah, it is definitely a big deal.

      I expect it will be a net positive: they proved that you can both train and run inference against powerful models for way less compute than people had previously expected - and they published enough details that other AI labs are already starting to replicate their results.

      I think this will mean cheaper, faster, and better models.

      This FAQ about it is very good: https://stratechery.com/2025/deepseek-faq/

      • netdevphoenix 2 days ago

        Why did DeepSeek not keep this for themselves? Is this a Meta-style scorched-earth strategy?

        • Rzor 2 days ago

          >An Yong: But DeepSeek is a business, not a nonprofit research lab. If you innovate and open-source your breakthroughs—like the MLA architecture innovation releasing in May—won’t competitors quickly copy them? Where’s your moat?

          >Liang Wenfeng: In disruptive tech, closed-source moats are fleeting. Even OpenAI’s closed-source model can’t prevent others from catching up.

          >Therefore, our real moat lies in our team’s growth—accumulating know-how, fostering an innovative culture. Open-sourcing and publishing papers don’t result in significant losses. For technologists, being followed is rewarding. Open-source is cultural, not just commercial. Giving back is an honor, and it attracts talent.

          https://thechinaacademy.org/interview-with-deepseek-founder-...

        • simonw 2 days ago

          There are a bunch of theories floating round.

          Personally this looks to me like an ego thing: the DeepSeek team are really, really good and their CEO is enjoying the enormous attention they are getting, plus the pride of proving that Chinese AI labs can take the lead in a field that everyone thought the USA was unassailable in.

          Maybe they are true believers in building and sharing "AGI" with the world?

          Lots of people see this as a Chinese government backed conspiracy to undermine the US AI industry. I'm not sure how credible that idea is.

          I saw somewhere (though I've not confirmed it with a second source) that none of the people listed on the DeepSeek papers got educated at US universities - they all went to school in China, which further emphasizes how good China's home-grown talent pool has got.

          • kragen 2 days ago

            > a Chinese government backed conspiracy to undermine the US AI industry

            To me this sounds like describing Lockheed as a US government backed conspiracy to undermine the Tupolev Aerospace Design Bureau. It really stretches the normal connotations of words, and it presupposes that the center of the world is conveniently located very close to the speaker.

          • rfoo 2 days ago

            > none of the people listed on the DeepSeek papers got educated at US universities

            "You have been educated at foreign universities / worked at foreign companies" is indeed an excuse they have used at least once to refuse a candidate. n=1 though so maybe that's just a convenient excuse. There's one guy who went to University of Adelaide (IIRC) on the paper.

          • eptcyka 2 days ago

            It makes Trump look like a chump.

          • dluan 2 days ago

            > Chinese government backed conspiracy

            Do you understand how ginormous China is and how ridiculous this kind of made up boogeyman statement sounds?

        • AnotherGoodName 2 days ago

          It’s a bunch of known optimisations bundled together rather than any single revolutionary change.

          It’s also more open than any other model (though still under a bespoke licence). There’s honestly nothing to hide here, and without the openness it wouldn’t be as interesting.

      • nuancebydefault 2 days ago

        From the faq

        'So are we close to AGI? It definitely seems like it. This also explains why Softbank (and whatever investors Masayoshi Son brings together) would provide the funding for OpenAI that Microsoft will not: the belief that we are reaching a takeoff point where there will in fact be real returns towards being first.'

        Interesting.

      • startupsfail 2 days ago

        This may mean that the $3k/task on some benchmarks published by OpenAI now comes at a slightly lower price tag.

        It is possible, however, that OpenAI was using a similar level of acceleration in the first place and just hasn’t published the details. And a few engineers left and replicated it (or even bested it) in a new lab.

        Overall, it’s a good boost: modern software is getting a better fit to the new generation of hardware and is performing faster. Maybe we should pay more attention when NVIDIA publishes their N-times-faster TOPS numbers, and not completely dismiss them as marketing.

    • GaggiX 2 days ago

      DeepSeek R1 is o1 but free to use, open source, and also distilled into different models, even ones that can run on your phone, so yeah.

    • llm_trw 2 days ago

      The end result is on par with o1-preview, which is ironically more intelligent than o1, but the intermediate tokens are actually useful. I got it running locally last night, and out of 50 questions so far I've gotten the answer in the chain of thought more than half the time.

    • coliveira 2 days ago

      It depends on the problem type. If your problem requires math reasoning, DeepSeek's response is quite impressive and surpasses what most people can do in a single session.

    • csomar 2 days ago

      Everyone else should rejoice. OpenAI is probably cooked, however. Nvidia might be cooked too.

      • snowram 2 days ago

        Is Nvidia really cooked? If this new RL technique does scale, couldn't a bigger model be made that would require more compute power for training and inference?

        • swfsql 2 days ago

          I read around that DeepSeek's team managed to work around hardware limitations, and that in theory goes against the "gatekeeping" or "frontrunning" investment expectations around Nvidia. If a partial chunk of investment is a bet on those expectations, that would explain part of the stock turbulence. I think their 25x inference price reduction vs OpenAI is what really affected everything, besides the (uncertain) training cost reduction.

          • BeefWellington 2 days ago

            We all use PCs and heck even phones that have thousands of times the system memory of the first PCs.

            Making something work really efficiently on older hardware doesn't necessarily imply less demand. If those lessons can be taken and applied to newer generations of hardware, it would seem to make the newer hardware all the more valuable.

        • cool_dude85 2 days ago

          Imagine an s-curve relating capital expenditure on compute and "performance" as the y-axis. It's possible that this does not change the upper bound of the s-curve but just shifts the performance gains way to the left. Such a scenario would wipe out a huge amount of the value of Nvidia.

          • treis 2 days ago

            I don't think it matters much to Nvidia so long as they're the market leader. If AI gets cheaper to compute it just changes who buys. Goes from hyperscalers to there being an AI chip in every phone, tablet, laptop, etc. still lots and lots of money to be made.

  • whitehexagon 2 days ago

    Agreed, I've switched from QwQ to the same model. I'm running it under Ollama on an M1 with Asahi Linux and it seems maybe twice the speed (not very scientific, but I'm not sure how to time the token generation), and, dare I say, smarter than QwQ, while using maybe a tad less RAM. It still over-ponders, but not as badly as the pages and pages of 'that looks wrong, maybe I should try...' circles I got with QwQ, which was already so impressive.

    I'm quite new to this, how are you feeding in so much text? just copy/paste? I'd love to be able to run some of my Zig code through it, but I haven't managed to get Zig running under Asahi so far.

  • buyucu 2 days ago

    DeepSeek-R1-Distill-Qwen-32B is my new default model on my home server. Previously it was aya-32b.

    • xenospn 2 days ago

      What do you use it at home for?

  • m3kw9 2 days ago

    What does distil qwen 32b mean? It uses qwen for what?

    • buyucu 2 days ago

      DeepSeek fine-tuned Qwen-32B on data generated by the 671B-parameter DeepSeek model.

amarcheschi 2 days ago

From what I can understand, he asked DeepSeek to convert ARM SIMD code to WASM SIMD code.

In the gist he links, he gives an example of a prompt: "Your task is to convert a given C++ ARM NEON SIMD to WASM SIMD. Here is an example of another function:" (followed by an example block and a block with the instructions to convert).

https://gist.github.com/ngxson/307140d24d80748bd683b396ba13b...

I might be wrong of course, but asking to optimize code is something that quite helped me when I first started learning PyTorch. I feel like "99% of this code blabla" is useful in that it lets you understand it was AI-written, but it shouldn't be a brag. Then again, I know nothing about SIMD instructions, but I don't see why it should be different for a capable LLM to do SIMD instructions or optimized high-level code (which is much harder than just working high-level code; I'm glad I can do the latter lol).
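
To give an idea of what such a conversion looks like in practice, here is a minimal sketch of my own (illustrative only, not code from the PR): the same dot-product kernel written with ARM NEON intrinsics and with the WASM SIMD128 equivalents. The intrinsics map almost one-to-one, which is part of why this is a good fit for an LLM.

    // Illustrative only -- not from the llama.cpp PR. Assumes n is a multiple of 4.
    #if defined(__ARM_NEON)
    #include <arm_neon.h>

    float dot_f32(const float *a, const float *b, int n) {
        float32x4_t acc = vdupq_n_f32(0.0f);
        for (int i = 0; i < n; i += 4)
            acc = vfmaq_f32(acc, vld1q_f32(a + i), vld1q_f32(b + i)); // acc += a * b
        return vaddvq_f32(acc);  // horizontal sum of the 4 lanes (AArch64)
    }

    #elif defined(__wasm_simd128__)
    #include <wasm_simd128.h>

    float dot_f32(const float *a, const float *b, int n) {
        v128_t acc = wasm_f32x4_splat(0.0f);
        for (int i = 0; i < n; i += 4)
            acc = wasm_f32x4_add(acc, wasm_f32x4_mul(wasm_v128_load(a + i),
                                                     wasm_v128_load(b + i)));
        return wasm_f32x4_extract_lane(acc, 0) + wasm_f32x4_extract_lane(acc, 1)
             + wasm_f32x4_extract_lane(acc, 2) + wasm_f32x4_extract_lane(acc, 3);
    }
    #endif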

  • thorum 2 days ago

    Yes, “take this clever code written by a smart human and convert it for WASM” is certainly less impressive than “write clever code from scratch” (and reassuring if you’re worried about losing your job to this thing).

    That said, translating good code to another language or environment is extremely useful. There’s a lot of low-hanging fruit where, for example, an existing high-quality library is written for Python or C# or something, and an LLM can automatically convert it to optimized Rust / TypeScript / your language of choice.

    • HanClinto a day ago

      Keep in mind, two of the functions were translated, and the third was created from scratch. Quoting from the FAQ on the Gist [1]:

      Q: "It only does conversion ARM NEON --> WASM SIMD, or it can invent new WASM SIMD code from scratch?"

      A: "It can do both. For qX_0 I asked it to convert, and for qX_K I asked it to invent new code."

      * [1]: https://gist.github.com/ngxson/307140d24d80748bd683b396ba13b...

    • th0ma5 2 days ago

      Porting well-written code is pretty fun and fast in my experience, if you know the target language well. Often, though, when there are library, API, or language-feature differences, those are better handled outside the model, because fully describing the entire context to a model takes more work than it's worth, in my experience.

  • freshtake 2 days ago

    This. For folks who regularly write simd/vmx/etc, this is a fairly straightforward PR, and one that uses very common patterns to achieve better parallelism.

    It's still cool nonetheless, but not a particularly great test of DeepSeek vs. alternatives.

    • gauge_field 2 days ago

      That is what I am struggling to understand about the hype. I regularly use them to generate new SIMD code. Other than a few edge cases (issues around handling of NaN values, the argument order of corresponding ops, availability of new AVX-512F intrinsics), they are pretty good at converting. The intrinsic names are very similar from one SIMD instruction set to another. The self-explanatory nature of the intrinsic names and the similar APIs across SIMD instruction sets make this a somewhat expected result given what these models can already accomplish.

      • amarcheschi 2 days ago

        If I had to guess, it's both the title ("ggml : x2 speed for WASM by optimizing SIMD") and the PR being written by AI.

        • gauge_field 2 days ago

          + DeepSeek recently being in the headlines + lack of knowledge around SIMD extensions. Modern social media is interesting...

          • amarcheschi a day ago

            I do have to say that before I knew what SIMD was, it was all black magic to me. Now I've had to learn how it works for my thesis, at a very shallow level, and I have to say it's much less black magic than before, although I still wouldn't be able to write SIMD code.

    • csomar a day ago

      DeepSeek R1 is not exactly better than the alternatives. It is, however, open, as in open-weight, and requires far fewer resources. That is what’s disruptive about it.

  • softwaredoug 2 days ago

    LLMs are great at converting code, I've taken functions whole cloth and converted them before and been really impressed

CharlesW 2 days ago

For those who aren't tempted to click through, the buried lede for this (and why I'm glad it's being linked to again today) is that "99% of the code in this PR [for llama.cpp] is written by DeepSeek-R1", as conducted by Xuan-Son Nguyen.

That seems like a notable milestone.

  • drysine 2 days ago

    >99% of the code in this PR [for llama.cpp] is written by DeepSeek-R1

    Yes, but:

    "For the qX_K it's more complicated, I would say most of the time I need to re-prompt it 4 to 8 more times.

    The most difficult was q6_K, the code never works until I ask it to only optimize one specific part, while leaving the rest intact (so it does not mess up everything)" [0]

    And also there:

    "You must start your code with #elif defined(__wasm_simd128__)

    To think about it, you need to take into account both the reference code from ARM NEON and AVX implementation."

    [0] https://gist.github.com/ngxson/307140d24d80748bd683b396ba13b...
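
    (The "#elif" requirement makes more sense if you know how ggml lays out these kernels: roughly one hand-written variant per instruction set, selected by a preprocessor chain, so the model's new WASM code has to slot in as one more branch alongside the existing NEON and AVX ones. A simplified sketch with illustrative names, not the actual ggml source:)

        #if defined(__ARM_NEON)
        #include <arm_neon.h>
        #elif defined(__AVX2__)
        #include <immintrin.h>
        #elif defined(__wasm_simd128__)
        #include <wasm_simd128.h>
        #endif

        // One implementation per instruction set, chosen at compile time.
        // Assumes n is a multiple of the vector width; real code handles remainders.
        void vec_add_f32(const float *a, const float *b, float *c, int n) {
        #if defined(__ARM_NEON)
            for (int i = 0; i < n; i += 4)
                vst1q_f32(c + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
        #elif defined(__AVX2__)
            for (int i = 0; i < n; i += 8)
                _mm256_storeu_ps(c + i,
                    _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i)));
        #elif defined(__wasm_simd128__)
            for (int i = 0; i < n; i += 4)
                wasm_v128_store(c + i,
                    wasm_f32x4_add(wasm_v128_load(a + i), wasm_v128_load(b + i)));
        #else
            for (int i = 0; i < n; i++) c[i] = a[i] + b[i];  // scalar fallback
        #endif
        }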

    • janwas a day ago

      Interesting that both de novo generation and porting seem to have worked.

      I do not understand why GGML is written this way, though. So much duplication, one variant per instruction set. Our Gemma.cpp only requires a single backend written using Highway's portable intrinsics, and last I checked, for decode on SKX+Zen4, it is also faster.
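
      For readers who haven't seen Highway: the idea is to write each kernel once against portable vector types and let the library map it to NEON/AVX/WASM at build time, instead of maintaining one #elif branch per instruction set. A rough sketch of that style (from memory, so treat the exact calls as approximate rather than canonical Gemma.cpp code; real code also adds per-target dispatch boilerplate):

        #include "hwy/highway.h"
        namespace hn = hwy::HWY_NAMESPACE;

        // Written once; Highway picks the vector width for the build target.
        // Assumes n is a multiple of the lane count for brevity.
        void VecAddF32(const float* a, const float* b, float* c, size_t n) {
          const hn::ScalableTag<float> d;
          for (size_t i = 0; i < n; i += hn::Lanes(d)) {
            hn::StoreU(hn::Add(hn::LoadU(d, a + i), hn::LoadU(d, b + i)), d, c + i);
          }
        }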

  • aithrowawaycomm 2 days ago

    Reading through the PR makes me glad I got off GitHub - not for anything AI-related, but because it has become a social media platform, where what should be a focused and technical discussion gets derailed by strangers waging the same flame wars you can find anywhere else.

    • skeaker 2 days ago

      This depends pretty heavily on the repo.

jeswin 2 days ago

> 99% of the code in this PR [for llama.cpp] is written by DeepSeek-R1

I hope we can put to rest the argument that LLMs are only marginally useful in coding - which are often among the top comments on many threads. I suppose these arguments arise from (a) having used only GH copilot which is the worst tool, or (b) not having spent enough time with the tool/llm, or (c) apprehension. I've given up responding to these.

Our trade has changed forever, and there's no going back. When companies claim that AI will replace developers, it isn't entirely bluster. Jobs are going to be lost unless there's somehow a demand for more applications.

  • simonw 2 days ago

    "Jobs are going to be lost unless there's somehow a demand for more applications."

    That's why I'm not worried. There is already SO MUCH more demand for code than we're able to keep up with. Show me a company that doesn't have a backlog a mile long where most of the internal conversations are about how to prioritize what to build next.

    I think LLM assistance makes programmers significantly more productive, which makes us MORE valuable because we can deliver more business value in the same amount of time.

    Companies that would never have considered building custom software because they'd need a team of 6 working for 12 months may now hire developers if they only need 2 working for 3 months to get something useful.

    • jeswin 2 days ago

      > That's why I'm not worried. There is already SO MUCH more demand for code than we're able to keep up with. Show me a company that doesn't have a backlog a mile long where most of the internal conversations are about how to prioritize what to build next.

      I worry about junior developers. It will be a while before vocational programming courses retool to teach this new way of writing code, and these are going to be testing times for so many of them. If you ask me why this will take time, my argument is that effectively wielding an LLM for coding requires broad knowledge. For example, if you're writing web apps, you need to be able to spot say security issues. And various other best practices, depending on what you're making.

      It's a difficult problem to solve, requiring new sets of books, courses etc.

      • onetimeusename 2 days ago

        Just as a side note, at my university about half the CS people are in the AI track. I would guess that number will keep increasing. There is also a separate major that kind of focuses on AI/psychology that is pretty popular but I am not sure how many people are in it. A good number of the students have some kind of "AI startup". Also, although it violates the honor code, I would be willing to bet many students use AI in some way for doing programming assignments.

        This isn't to say you are wrong but just to put some perspective on how things are changing. Maybe most new programmers will be hired into AI roles or data science.

        • AnotherGoodName 2 days ago

          The ask from every new grad to be assigned to AI development is unreasonable right now, and honestly they are probably hurting their careers by all going in the same direction. It’s a small fraction of our development efforts and we usually hire very senior for that sort of role. We still need people who can program for the day-to-day business needs, and it’s a perfect starting role for a new grad, yet almost all of them are asking for assignment to AI development.

          I appreciate anyone who can utilise AI well, but there just aren’t enough core AI model development jobs for every new grad.

          • LeFantome 2 days ago

            Agree and disagree. You don’t need a “degree in AI”. However, you need to be using AI in your degree. Really using it.

            What are those “day to day business needs” that you think people are going to do without AI?

            In my view, this is like 1981. If you are saying, we will still need non-computer people for day-to-day business needs, you are wrong. Even the guy in the warehouse and the receptionist at the front are using computers. So is the CEO. That does not mean that everybody can build one, but just think of the number of jobs in a modern company that require decent Excel skills. It is not just the one in finance. We probably don’t know what the “Excel” of AI is just yet but we are all going to need to be great at it, regardless of who is building the next generation of tools.

            • OvbiousError 2 days ago

              Wouldn't the AI track be more about knowing the internals, being able to build models, and so on? So in your 1981 example, that would be like saying about half the people are enrolling in computer hardware courses, whereas only a fraction of those are needed?

              I would assume any other CS course teaches/is going to be teaching how to use AI to be an effective software developer.

            • gmassman 2 days ago

              I agree with your point in general, but saying one needs to be great at using AI tools gives way too much credit to companies’ ability to identify low performers. Especially in large organizations, optics matter far more than productive output. Being able to use AI tools is quite different from saying you are using AI tools!

        • bick_nyers 2 days ago

          An actual hardcore technical AI "psychology" program would actually be really cool. Could be a good onboarding for prompt engineering (if it still exists in 5 years).

        • tomrod 2 days ago

          Yeah, the younguns smell opportunity and run towards it. They'll be fine. It's the less experienced folks already in the current corporate world who have the most to lose.

          • CalChris 2 days ago

            Or perhaps it will be the more experienced knuckle draggers, hardened in our ways.

            • AnotherGoodName 2 days ago

              The really experienced of us will have made this mistake enough times to know to avoid it.

              I didn’t get a smartphone until the 2010s. Stupid, I know, but it was seen as a badge of honour in some circles. ‘Bah, I don’t even use a smart phone’, we’d say, as the young crowd went about their lives never getting lost without a map and generally having an easier time of it, since they didn’t have that mental block.

              AI is going to be similar, no doubt. I’m already seeing ‘bah, I don’t use AI coding assistants’ type posts, wearing it as a badge of honour. ‘OK, you’re making things harder for yourself’ should be the reply, but we’ll no doubt have people wearing it as a badge of honour for some time yet.

      • LeFantome 2 days ago

        Think of how much easier it is to learn to code if you actually want to.

        The mantra has always been that the best way to learn to code is to read other people’s code. Now you can have “other people” write you code for whatever you want. You can study it and see how it works. You can explore different ways of accomplishing the same tasks. You can look at the similar implementations in different languages. And you may be able to see the reasoning and research for it all. You are never going to get that kind of access to senior devs. Most people would never work up the courage to ask. Plus, you are going to become wicked good at using the AI and automation including being deeply in touch with its strengths and weaknesses. Honestly, I am not sure how older, already working devs are going to keep up with those that enter the field 3 years from now.

        • kadushka 2 days ago

          People get wicked good by solving hard problems. Many young developers use AI to solve problems with little effort. Not sure what effect this will have on the quality of future developers.

      • motorest 2 days ago

        > I worry about junior developers. It will be a while before vocational programming courses retool to teach this new way of writing code, and these are going to be testing times for so many of them.

        I don't agree. LLMs work as template engines on steroids. The role of a developer now includes more code reviewing than code typing. You need the exact same core curriculum to be able to parse code, regardless of whether you're the one writing it, it's a PR, or it's output from a chatbot.

        > For example, if you're writing web apps, you need to be able to spot say security issues. And various other best practices, depending on what you're making.

        You're either overthinking it or overselling it. LLMs generate code, but that's just the starting point. The bulk of developer's work is modifying your code to either fix an issue or implement a feature. You need a developer to guide the approach.

        • danielbln 2 days ago

          That's a broad statement. If the IDE checks types and feeds errors back to the LLM, then that loop is very well able to fix an issue or implement a feature all on its own (see aider, cline etc )

          • motorest a day ago

            > That's a broad statement.

            It isn't. Anyone who does software development for a living can explain to you what exactly is the day-to-day work of a software developer. It ain't writing code, and you spend far more time reading code than writing it. This is a known fact for decades.

            > If the IDE checks types and feeds errors back to the LLM,(...)

            Irrelevant. Anyone who does software development for a living can tell you that code review is way more than spotting bugs. In fact, some companies only trigger PR reviews once all automated tests pass.

      • cyanydeez 2 days ago

        That's basically the AI Rubicon everywhere, from flying planes to programming: soon there'll be no real fallback. When AI fails, you can't just put the controls in front of a person and expect them to have reasonable expertise to respond.

        Really, what seems on the horizon is a cliff of techno risks that have nothing to do with "AI will take over the world" and more "AI will be so integral to functional humanity that actual risks become so diffuse that no one can stop it."

        So it's more a conceptual belief: will AI actually make driving cars safer, or will the fatalities of AI just be so randomly stochastic that they're more acceptable?

        • entropicdrifter 2 days ago

          >So it's more a conceptual belief: will AI actually make driving cars safer, or will the fatalities of AI just be so randomly stochastic that they're more acceptable?

          I would argue that we already accept relatively random car fatalities at a huge scale and simply engage in post-hoc rationalization of the why and how of individual accidents that affect us personally. If we can drastically reduce the rate of accidents, the remaining accidents will be post-hoc rationalized the same way we always have rationalized accidents.

          • cyanydeez 2 days ago

            Well, we don't accept it in the sense of "we can't blame someone", which is what I'm saying. Soon it'll be like a forest fire or a conspiracy theory.

            Currently, car crashes are blamed on the individuals involved.

            • entropicdrifter 2 days ago

              Sometimes, but sometimes people just say stuff like "god is testing us" when things appear to be truly random.

              I reckon we'll see a lot of new religious thinking about this stuff

              • cyanydeez 2 days ago

                I'm not talking about random people's delusions.

                This is about a functional society where people fundamentally have recourse to "blame" one another, via legal means, for things.

                Having fallbacks, e.g. pilots in the cockpit, is not a long-term strategy for AI pilots flying planes, because those fallback humans will functionally never be sufficiently trained for actual scenarios.

      • LeFantome 2 days ago

        “ It's a difficult problem to solve, requiring new sets of books, courses etc.”

        Instead of this, have you considered asking Deep Seek to explain it to you?

        • numba888 2 days ago

          By the time a book comes out, it's outdated. DeepSeek has its own knowledge cut-off date.

          And here is the problem: AI needs to be trained on something. Use of AI reduces the use of online forums, and some of them, like Reddit, are actively blocking access. So, for AI to stay relevant, it has to generate the knowledge by itself: having full control of a computer, taking queries from a human supervisor, and really trying to solve them. Having this sort of AI actor in online forums would benefit everyone.

        • smokel 2 days ago

          Before this comment gets downvoted, please note the irony. AI models may solve some technical problems, but the actual problems to be solved are of a societal nature, and won't be solved in our lifetimes.

          • numba888 2 days ago

            and in the next lifetimes too. humans are good at making problems. only lobotomy and AGI Gods can sort of 'solve' them.

          • the_af 2 days ago

            I agree there are hard societal problems that tech alone cannot solve -- or at all. It reminds me of the era, not long ago, when the hipster startup bros thought "there is an app for that" (and they were ridiculously out of touch with the actual problem, which was famine, homelessness, poverty, a natural disaster, etc).

            For mankind, the really big problems aren't going away any time soon.

            But -- and it's a big but -- many of us aren't working on those problems. I'm ready to agree most of what I've done for decades in my engineering job(s) is largely inconsequential. I don't delude myself into thinking I'm changing the world. I know I'm not!

            What I'm doing is working on something interesting (not always) while earning a nice paycheck and supporting my family and my hobbies. If this goes away, I'll struggle. Should the world care? Likely not. But I care. And I'm unlikely to start working on solving societal problems as a job, it's too much of a burden to bear.

      • curious_cat_163 2 days ago

        > If you ask me why this will take time, my argument is that effectively wielding an LLM for coding requires broad knowledge.

        This is a problem that the Computer Science departments of the world have been solving. I think that the "good" departments already go for "broad knowledge" of theory and systems, with a balance between the trendy and the timeless.

      • bick_nyers 2 days ago

        I definitely agree with you in the interim regarding junior developers. However, I do think we will eventually have the AI coding equivalent of CI/CD, built perhaps into our IDEs. Basically, when an AI generates some code to implement something, you chain out more AI queries to test it, modify it, check it for security vulnerabilities, etc.

        Now, the first response some folks may have is, how can you trust that the AI is good at security? Well, in this example, it only needs to be better than the junior developers at security to provide them with benefits/learning opportunities. We need to remember that the junior developers of today can also just as easily write insecure code.

        • LeFantome 2 days ago

          If it can point out the things you may need to consider, it is already better at security than most dev teams in the world today. Deep Seek can already do that.

      • herval 2 days ago

        This is my main worry with the entire AI trend too. We're creating a huge gap for those joining the industry right now, with markedly fewer job openings for junior people. Who will inherit the machine?

        • belter 2 days ago

          >Who will inherit the machine?

          Extremely well paid human coders, capable of fixing the mistakes of the years preceding them...

      • butlike 2 days ago

        CS Fundamentals are CS fundamentals, whether you're writing the B-tree or spot-checking it.

      • sdesol 2 days ago

        Full disclosure: I am writing a chat app that is designed for software development

        > It's a difficult problem to solve, requiring new sets of books, courses etc.

        I think new tooling built around LLMs that fits into our current software development lifecycle is going to make a big difference. I am experiencing firsthand how much more productive I am with LLM, and I think that in the future, we will start using "Can you review my conversation?" in the same way we use "Can you review my code?"

        Where I believe LLMs are a real game changer is they make it a lot easier for us to consume information. For example, I am currently working on adding a Drag and Drop feature for my chat input box. If a junior developer is tasked with this, the senior developer can easily have the LLM generate a summary of their conversation like so:

        https://beta.gitsense.com/?chat=d36e0282-4326-46cf-83b1-4207...

        At this point, the senior developer can see if anything is missed; if desired, they can fork the conversation to ask the LLM questions like "Was this asked?" or "Was this mentioned?"

        And once everybody is happy, you can have the LLM generate a PR title and message like so:

        https://beta.gitsense.com/?chat=8aa19528-5891-4dda-9a88-247a...

        All of this took me about 10 minutes, which would have taken me an hour or maybe more without LLMs.

        And from here, you are now ready to think about coding with or without LLM.

        I think with proper tooling, we might be able to accelerate the learning process for junior developers as we now have an intermediate layer that can better articulate the senior developers' thoughts. If the junior developer is too embarrassed to ask for clarification on why the senior developer said what they did, they can easily ask the LLM to explain.

        The issue right now is that we are so focused on the moon shots for LLM, but the simple fact is that we don't need it for coding if we don't want to. We can use it in a better way to communicate and gather requirements, which will go a long way to writing better code faster.

      • simonw 2 days ago

        Yeah, it's going to suck for junior developers for a while.

        The ones who are self-starters will do fine - they'll figure out how to accelerate their way up the learning curve using these new tools.

        People who prefer classroom-learning / guided education are going to be at a disadvantage for a few years while the education space retools for this new world.

        • antonislav 2 days ago

          I think, seeing recordings of people using LLMs to accomplish non-trivial tasks would go a long way.

          I’d love to watch, e.g. you Simon, using these tools. I assume there are so many little tricks you figured out over time that together make a big difference. Things that come to mind:

          - how to quickly validate the output?

          - what tooling to use for iterating back and forth with the LLM? (just a chat?)

          - how to steer the LLM towards a certain kind of solutions?

          - what is the right context to provide to the LLM? How to do it technically?

          • plandis 2 days ago

            I believe Simon has full transcripts for some of the projects he’s had LLMs generate the code for. You can see how he steers the LLM for what is desired and how it is course corrected.

      • Peacefulz 2 days ago

        I personally think that having hands on keyboards is still going to be imperative. Anyone can have an idea, but not everyone is going to be able to articulate that idea to an AI model in a way that will produce high quality, secure software.

        I'm by no means an expert, but I feel like you still need someone who understands underlying principles and best practices to create something of value.

        • thelittleone 2 days ago

          This assumes that prompts do not evolve to the point where grandma can mutter some words to an AI that produces an app that solves a problem. Prompts are an art form and a friction point to great results. It was only some months before reasoning models that CoT prompts were state of the art. Reasoning models take that friction away.

          Thinking it out even further, programming languages will likely go away altogether as ultimately they're just human interfaces to machine language.

          • quesera 2 days ago

            > programming languages will likely go away altogether

            As we know them, certainly.

            I haven't seen discussions about this (links welcome!), but I find it fascinating.

            What would a PL look like, if it was not designed to be written by humans, but instead be some kind of intermediate format generated by an AI for humans to review?

            It would need to be a kind of formal specification. There would be multiple levels of abstraction -- stakeholders and product management would have a high level lens, then you'd need technologists to verify the correctness of details. Parts could still be abstracted away like we do with libraries today.

            It would be way too verbose as a development language, but clear and accessible enough that all of our arcane syntax knowledge would be obsolete.

            This intermediate spec would be a living document, interactive and sensitive to modifications and aware of how they'd impact other parts of the spec.

            When the modifications are settled, the spec would be reingested and the AI would produce "code", or more likely be compiled directly to executable blobs.

            ...

            In the end, I still think this ends up with really smart "developers" who don't need to know a lick of code to produce a full product. PLs will be seen as the cute anachronisms of an immature industry. Future generations will laugh at the idea that anybody ever cared about tabs-v-spaces (fair enough!).

            • thelittleone 2 days ago

              I find it similarly fascinating.

              Take for example neuralink. If you consider that interface 10 years, or further 1000 years out in the future, it's likely we will have a direct, thought-based human computer interface. Which is interesting when thinking of this for sending information to the computer, but even more so (if equally alarming) for information flowing from computer to human. Whereas today, we read text on web pages, or listen to audio books, in that future, we may instead receive felt experiences / knowledge / wisdom.

              Have you had a chance to read 'Metaman: The Merging of Humans and Machines into a Global Superorganism' from 1993?

      • kragen 2 days ago

        I feel like getting an LLM to spot security holes might be easier than getting it to write secure code.

      • worik 2 days ago

        > It will be a while before vocational programming courses retool to teach this new way of writing code

        Why?

        Are they not already?

      • jhsvsmyself 2 days ago

        There are already courses that are centered around coding with AI.

        • belter 2 days ago

          > Courses that are centered around coding with AI.

          Everybody is a Manager now?

    • sitkack 2 days ago

      We have already entered a new paradigm of software development, where small teams build software for themselves to solve their own problems rather than making software to sell to people. I think selling software will get harder in the future unless it comes with special affordances.

      • LeFantome 2 days ago

        I think some of the CEOs have it right on this one. What is going to get harder is selling “applications” that are really just user friendly ways of getting data in and out of databases. Honestly, most enterprise software is just this.

        AI agents will do the same job.

        What will still matter is software that constrains what kind of data ends up in the database and ensures that data means what it is supposed to. That software will be created by local teams that know the business and the data. They will use AI to write the software and test it. Will those teams be “developers”? It is probably semantics or a matter of degree. Half the people writing advanced Excel spreadsheets today should probably be considered developers really.

        • bee_rider 2 days ago

          Mostly agree, even without a database-centered worldview.

          Programming languages are languages to tell the computer what to do. In the beginning, people wrote in machine code. Then, high level languages like C and FORTRAN were invented. Since then we’ve been iterating on the high level language idea.

          These LLM based tools seem to be a more abstract way of telling the computer what to do. And they really might, if they work out, be a jump similar to the low/high level split. Maybe in the future we’ll talk about low-level, high-level, and natural programming languages. The only awkwardness will be saying “I have to drop down to a high level language to really understand what the computer is doing.” But anyway, there were programmers on either side of that first split (way more after), if there’s another one I suspect there will still be programmers after.

        • kakapo1988 2 days ago

          At some point, I wonder if it will be advantageous for AI to just drop down directly into machine code, without any intermediate expression in higher-level languages. Greater efficiency?

          Obviously, source allows human tuning, auditing, and so on. But taken at the limit, those aspects may eventually no longer be necessary. Just a riff here, as the thought just occurred.

          • sureglymop 2 days ago

            In the past I've had a similar thought: what if the scheduler used by the kernel were an AI? Better yet, what if it could learn your usage patterns and schedule accordingly?

        • butlike 2 days ago

          Maybe, but it's the same argument trickling down. You'll need the CRUD-apps because you hired Cindy to press the button, and if shit goes pear-shaped, you can point to Cindy in the post-mortem. If it's some AI agent pressing the button to egress data from the database, and there's an anomaly, then it's a systemic failure at a macro level at that company, which is harder to write a press release about.

        • sitkack 2 days ago

          Many applications can and should be replaced by a prompt and a database. This is the nature of increased expressive and computational power. So many whip manufacturers are about to go out of business, especially those offering whips-as-a-service.

        • cess11 2 days ago

          No, enterprise software is typically also risk management and compliance, domains where rules rule. Someone needs to sign off on the software being up to spec and take responsibility for failures; that's something any submissive LLM is willing to do but can't.

      • __MatrixMan__ 2 days ago

        ...which is a good thing. Software made by the people using it to better meet their specific needs is typically far better than software made to be a product, which also has to meet a bunch of extra requirements that the user doesn't care about.

    • rybosworld 2 days ago

      > There is already SO MUCH more demand for code than we're able to keep up with. Show me a company that doesn't have a backlog a mile long where most of the internal conversations are about how to prioritize what to build next.

      This is viewing things too narrowly I think. Why do we even need most of our current software tools aside from allowing people to execute a specific task? AI won't need VSCode. If AI can short circuit the need for most, if not nearly all enterprise software, then I wouldn't expect software demand to increase.

      Demand for intelligent systems will certainly increase. And I think many people are hopeful that you'll still need humans to manage them but I think that hope is misplaced. These things are already approaching human level intellect, if not exceeding it, in most domains. Viewed through that lens, human intervention will hamper these systems and make them less effective. The rise of chess engines are the perfect example of this. Allow a human to pair with stockfish and override stockfish's favored move at will. This combination will lose every single game to a stockfish-only opponent.

      • bee_rider 2 days ago

        That’s a fine thing to believe.

        But the bit of data we got in this story is that a human wrote tests for a human-identified opportunity, then wrote some prompts, iterated on those prompts, and then produced a patch to be sent in for review by other humans.

        If you already believed that there might be some fully autonomous coding going on, this event doesn’t contradict your belief. But it doesn’t really support it either. This is another iteration on stuff that’s already been seen. This isn’t to cheapen the accomplishment. The range of stuff these tools can do is growing at an impressive rate. So far though it seems like they need technical people good enough to define problems for them and evaluate the output…

      • logicchains 2 days ago

        >AI won't need VSCode

        Why not? It's still going to be quicker for the AI to use automated refactoring tooling than to manually make all the changes itself.

        • rybosworld 2 days ago

          Maybe I should have said: AI already doesn't need VSCode, or any IDE at all.

          • kragen 2 days ago

            Maybe it would work better if it used an IDE rather than having to write flawless code without ever testing it?

            • abecedarius 2 days ago

              I tried something related today with Claude, who'd messed up a certain visualization of entropies using JS: I snapped a phone photo and said 'behold'. The next try was a glitch mess, and I said hey, could you get your JS to capture the canvas as an image and then just look at the image yourself? Claude could indeed, and successfully debugged zir own code that way with no more guidance.

              This was all in the default web chat UI.

          • layer8 2 days ago

            GAI (if we get it) will start creating its own tools and programming languages to become more efficient. Tools as such won’t be going away. GAI will use them for the same reasons we do.

            • rybosworld a day ago

              Whether that's true or not, it doesn't change the fact that at some point it won't be humans creating these tools.

            • layer8 19 hours ago

              s/GAI/AGI/

    • aibot923 2 days ago

      It's interesting. Maybe I'm in the bigtech bubble, but to me it looks like there isn't enough work for everyone already. Good projects are few and far between. Most of our effort is keeping the lights on for the stuff built over the last 15-20 years. We're really out of big product ideas.

      • Taylor_OD 2 days ago

        Good projects !== work

        There is a lot of work. Plenty of it just isn't super fun or interesting.

      • __MatrixMan__ 2 days ago

        That's because software is hard to make, and most projects don't make it far enough to prove themselves useful--despite them having the potential to be useful. If software gets easier, a whole new cohort of projects will start surviving past their larval stage.

        These might not be big products, but who wants big products anyway? You always have to bend over backwards to trick them into doing what you want. You should see the crazy stuff my partner does to make google docs fit her use case...

        Let's have an era of small products made by people who are close to the problems being solved.

      • agsqwe 2 days ago

        This is very similar to my experience as a software development agency to enterprise customers. Out of big product ideas.

        • nkassis 2 days ago

          Yes, a capacity increase on the developer side is great, but that's the supply side; we also need to figure out how to accelerate transforming needs into demand. This is what I foresee developers turning into (at least those capable of it): articulating logical solutions to problems and evaluating what gets generated to ensure it meets the needs.

          In other words, devs can move up the chain into what were traditionally product roles and increase development of new projects, using the time they regain from more menial tasks being automated away.

    • SecretDreams 2 days ago

      The big fear shouldn't be loss of jobs; it should be the inevitable attack on wages. Wages track inversely with how close the work is to being a commodity.

      Even the discussion around AI partially replacing coders is a direction towards commoditization.

      • Espressosaurus 2 days ago

        It's the same thing. If there are more workers than jobs, wages go down. If there are more jobs than workers, wages go up.

        We saw it crystal clear between the boom years, the trough, and the current recovery.

    • paulryanrogers 2 days ago

      Dev effort isn't always the bottleneck. It's often stakeholders ironing out the ambiguities, conflicting requirements, QA, ops, troubleshooting, etc.

      Maybe devs will be replaced with QA, or become glorified QA themselves.

    • n144q 2 days ago

      That's the naivety of software engineers. They can't see their limitations and think everything is just a technical problem.

      No, work is never the core problem. A backlog of bug fixes/enhancements is rarely what determines headcount. What matters is the business need. If the product sells and there is little or no competition, the company has very little incentive to improve its products, let alone hire people to do the work. You'd be thankful if a company does not lay off people in teams working on mature products. In fact, the opposite has been happening, for quite a while. There are so many examples out there that I don't need to name them.

    • gamblor956 2 days ago

      > Show me a company that doesn't have a backlog a mile long where most of the internal conversations are about how to prioritize what to build next.

      Most companies don't have a mile-long backlog of coding projects. That's a tech-industry-specific issue, and a lot of it is driven by the tech industry's obsessive compulsion to perpetually reinvent wheels.

      > Companies that would never have considered building custom software because they'd need a team of 6 working for 12 months may now hire developers if they only need 2 working for 3 months to get something useful.

      No, because most companies that can afford custom software want reliable software. Downtime is money. Getting unreliable custom software means that the next time around they'll just adapt their business processes to software that's already available on the market.

    • aomix 2 days ago

      I’m more bearish about LLMs, but even in the extreme optimist case this is why I’m not that concerned. Every project I’m on is triaged as the one that needs the most help right now. A world where a dozen projects don’t need to be left on the cutting room floor so one can live is a very exciting place.

    • ksec 2 days ago

      >There is already SO MUCH more demand for code than we're able to keep up with. Show me a company that doesn't have a backlog a mile long where most of the internal conversations are about how to prioritize what to build next.

      We really are in the iPhone moment of AI. I never thought I would witness something with a bigger impact than the smartphone. There is an insane amount of value we could extract, likely in the tens of trillions, from big businesses to small.

      We keep asking how Low Code or No Code "tools" could achieve custom apps. Turns out we are here via a different route.

      >custom software because they'd need a team of 6 working for 12 months may now hire developers if they only need 2 working for 3 months to get something useful.

      I am wondering if it will be more like 2 working for 1 month?

      • fragmede 2 days ago

        Less. How long would it take to build Twitter if you throw out all the difficult backend scaling problems and assume a Supabase DB was enough?

    • Taylor_OD 2 days ago

      Yup. People who know how to use it, and who work on tasks where LLM code is generally functional, are getting more done in less time.

      I don't trust companies to translate that to, "We can do more now" rather than, "We can do more with less people now" though.

    • whiplash451 a day ago

      The main problem is that engineers in the Western world won't get to see the benefits themselves, because a lot of Western companies will outsource the work to AI-enabled, much more effective developers in India.

      India and Eastern EU will win far more (relatively) than expensive devs in the US or Western EU.

    • littlestymaar 2 days ago

      I couldn't agree more.

      And this kind of fear mongering is particularly irritating when you see that our industry already faced a similar productivity shock less than twenty years ago: before open source went mainstream with GitHub and library hubs like npm, we used to code the same things over and over again, most of the time in a half-baked fashion, because nobody had time to polish stuff that was needed but only tangentially related to the core business. Then came the open-source tsunami, and suddenly there was a high-quality library for solving your particular problem, and the productivity gain was insane.

      Fast forward a few years: does it look like these productivity gains took any of our jobs? Quite the opposite, actually; there have never been as many developers as today.

      (Don't get me wrong, this is massively changing how we work, like the previous revolution did, and our job is never going to be the same again.)

    • UncleOxidant 2 days ago

      > That's why I'm not worried. There is already SO MUCH more demand for code than we're able to keep up with. Show me a company that doesn't have a backlog a mile long where most of the internal conversations are about how to prioritize what to build next.

      And yet many companies aren't hiring developers right now - folks in the C suite are thinking AI is going to be eliminating their need to hire engineers. Also "demand" doesn't necessarily mean that there's money available to develop this code. And remember that when code is created it needs to be maintained and there are costs for doing that as well.

      • simonw 2 days ago

        I continue to suspect that the hiring problems are mainly due to massive over-hiring during Covid, followed by layoffs that flooded the market with skilled developers looking for work.

        I'd love to see numbers around the "execs don't think they need engineers because of AI" factor. I've heard a few anecdotal examples of that but it's hard to tell if it's a real trend or just something that catches headlines.

        • withinboredom 2 days ago

          I think execs don’t see the problems we have with AI because you don’t need to be an expert to be an exec. I run into the edges of AI every day. There are things it is good at and things it is not so good at, and it varies from model to model and context to context (you can have two conversations with the same model, about the same thing, and get vastly different outputs; e.g. a test that uses assertion patterns/libraries different from the rest of the project). As an “expert” or “highly skilled” person, I recognize these issues when I see them, but to a layman, it just looks like code.

          • esafak 2 days ago

            So let them pay the AI to do it, and see it fail. With some luck, it will fail on their watch!

          • danenania 2 days ago

            Definitely. The brilliant moments also get cherry picked for social media.

        • menaerus 2 days ago

          Massive overhiring or not, it's a fact that many (skilled) engineers can't find a job. Many companies shut down during the past few years and the market became oversaturated overnight. Whether AI will help correct the market by creating more demand, we will see, but I wouldn't hold my breath. Many domain-specific skills became a commodity.

          • sharperguy 2 days ago

            We had a huge boom due to low interest rates allowing businesses to pay developers with borrowed money, effectively operating at a loss for years on the basis of future growth. Now that interest rates have risen, the need to actually be profitable has caused a lot of optimization and lower hiring overall.

          • nwienert 2 days ago

            Where's that fact coming from, as in it's higher than before? I seem to be getting more recruiting emails than ever, and have felt out interviewing at a few places which were very eager to find staff-level talent.

            • menaerus 2 days ago

              Personal experience and also what I hear from many people around me. Previously I would receive a request for an interview every two days or so. Lately, perhaps once a month, if at all. The foundational skills that I have were always scarce on the market, so that makes me believe the demand for them is now much, much lower.

              Another data point is that there's been ~10 companies that I have been following and all of them have been shut down in the past year or so.

              And there's the general feeling you get from the number of HN posts from people complaining about not being able to find jobs. It certainly wasn't like this before.

    • deadbabe 2 days ago

      Too much productivity can be a bad thing.

      If you’re infinitely productive, then the solution to every problem is to just keep producing stuff, instead of learning to say no.

      This means a lot of companies will overbuild, and then drown in maintenance problems and fail catastrophically when they can’t keep up.

    • Scipio_Afri 2 days ago

      100% agree with this take. People are spouting economic fallacies, in part because CEOs don't want the stock prices to fall too fast. Eventually people will widely realize this, and by then the economic payoffs will still be immense.

  • reitzensteinm 2 days ago

    When GPT-4 came out, I worked on a project called Duopoly [1], which was a coding bot that aimed to develop itself as much as possible.

    The first commit was half a page of code that read itself in, asked the user what change they'd like to make, sent that to GPT-4, and overwrote itself with the result. The second commit was GPT-4 adding docstrings and type hints.

    Over 80% of the code was written by AI in this manner, and at some point, I pulled the plug on humans, and the last couple hundred commits were entirely written by AI.

    It was a huge pain to develop with how slow and expensive and flaky the GPT-4 API was at the time. There was a lot of dancing around the tiny 8k context window. After spending thousands in GPT-4 credits, I decided to mark it as proof of concept complete and move on developing other tech with LLMs.

    Today, with Sonnet and R1, I don't think it would be difficult or expensive to bootstrap the thing entirely with AI, never writing a line of code. Aider, a fantastic similar tool written by HN user anotherpaulg, wasn't writing large amounts of its own code in the GPT-4 days. But today it's above 80% in some releases [2].

    Even if the models froze to what we have today, I don't think we've scratched the surface on what sophisticated tooling could get out of them.

    [1]: https://github.com/reitzensteinm/duopoly [2]: https://aider.chat/HISTORY.html

  • matsemann 2 days ago

    I read that Meta is tasking all engineers with figuring out how they got owned by DeepSeek. Couldn't they just have asked an LLM instead? After their claim of replacing all of us...

    I'm not too worried. If anything we're the last generation that knows how to debug and work through issues.

    • nkozyra 2 days ago

      > If anything we're the last generation that knows how to debug and work through issues.

      I suspect that comment might soon feel like saying "not too worried about assembly line robots, we're the only ones who know how to screw on the lug nuts when they pop off"

      • Barrin92 2 days ago

        I don't even see the irony in the comparison to be honest, being the assembly line robot controller and repairman is quite literally a better job than doing what the robot does by hand.

        If you're working in a modern manufacturing business the fact that you do your work with the aid of robots is hardly a sign of despair

        • nkozyra 2 days ago

          I don't claim it's a sign of despair. Rather, it's a boots-dug-in belief that what one does is special and cannot be done autonomously. I think it's wholly natural. Work, time, education ... these operate like sunk costs in our brains.

          I think what we're all learning in real-time is that human technology is perpetually aimed at replacing itself and we may soon see the largest such example of human utility displacement.

      • lukan 2 days ago

        Not before AGI and I still see no signs of it.

      • matsemann 2 days ago

        Heh, yeah. But the LLM in this instance only wrote 99% after the author guided it and prompted it over and over again, and even guided it on how to start certain lines. I can do that. But can a beginner ever get to that level without having that underlying knowledge?

    • dumbfounder 2 days ago

      Yep, and we still need COBOL programmers too. Your job as a technologist is to keep up with technology and use the best tools for the job to increase efficiency. If you don’t do this you will be left behind or you will be relegated to an esoteric job no one wants.

      • OsrsNeedsf2P 2 days ago

        > we still need COBOL programmers too

        I briefly looked into this 10 years ago since people kept saying it. There is no demand for COBOL programmers, and the pay is far below industry average. [0]

        [0] https://survey.stackoverflow.co/2024/work/#3-salary-and-expe...

        • Vampiero a day ago

          My poor baby boy Prolog... it's only down there because people are irrationally afraid of it :(

          And most are too focused on learning whatever slop the industry wants them to learn, so they don't even know that it exists. We need 500 different object oriented languages to do web applications after all. Can't be bothered with learning a new paradigm if it doesn't pay the bills!

          It's the most intuitive language I've ever learned and it has forever changed the way I think about problem solving. It's just logic, so it translates naturally from thought to code. I can go to a wikipedia page on some topic I barely know and write down all true statements on that page. Then I can run queries and discover stuff I didn't know.

          That's how I learned music theory, how scales and chords work, how to identify the key of a melody... You can't do that as easily and concisely in any other language.

          One day, LLM developers will finally open a book about AI and realize that this is what they've been missing all along.

    • hnthrow90348765 2 days ago

      A fair amount has been written on how to debug things, so it's not like the next generation can't learn it by also asking the AI (maybe learn it more slowly if 'learning with AI' is found to be slower)

  • spease 2 days ago

    The nature of this PR looks like it’s very LLM-friendly - it’s essentially translating existing code into SIMD.

    LLMs seem to do well at any kind of mapping / translating task, but they seem to have a harder time when you give them either a broader or less deterministic task, or when they don’t have the knowledge to complete the task and start hallucinating.

    It’s not a great metric to benchmark their ability to write typical code.

    • kridsdale3 2 days ago

      Sure, but let's still appreciate how awesome it is that this very difficult (for a human) PR is now essentially self-serve.

      How much hardware efficiency have we left on the table all these years because people don't like to think about optimal use of cache lines, array alignment, SIMD, etc.? I bet we could double or triple the speeds of all our computers.

      • spease 2 days ago

        Hopefully this results in some big improvements with compilation.

  • kemiller 2 days ago

    My observation in my years running a dev shop was that there are two classes of applications that could get built. One was the high-end, full-bore model requiring a team of engineers and hundreds of thousands of dollars to get to a basic MVP, which thus required an economic opportunity in at least the tens of millions. The other was very niche or geographically local businesses that can get their needs met with a self-service tool, max budget maybe $5k or so. Could stretch that to $25k if you use an offshore team to customize. But 9/10 incoming leads had budgets between $25k and $100k. We just had to turn them away. There's nothing meaningful you can do with that range of budget. I haven't seen anything particularly change that. Self-service tools get gradually better, but not enough to make a huge difference. The high end, if anything, has receded even faster as dev salaries have soared.

    AI coding, for all its flaws now, is the first thing that takes a chunk out of this, and there is a HUGE backlog of good-but-not-great ideas that are now viable.

    That said, this particular story is bogus. He "just wrote the tests" but that's a spec — implementing from a quality executable spec is much more straightforward. Deepseek isn't doing the design, he is. Still a massive accelerant.

  • Waterluvian 2 days ago

    I want this to be true. Actually writing the code is the least creative, least interesting part of my job.

    But I think it's still much too early for any form of "can we all just call it settled now?" In this case, as we all know, lines of code is not a useful metric. How many person-hours were spent doing anything associated with this PR's generation, how does that compare to not using AI tools, and how does the result compare in terms of the various forms of quality? That's the rubric I'd like to see us use in a more consistent manner.

  • xd 2 days ago

    The thing with programming is that to do it well, you need to fully understand the problem and then implement the solution by expressing it in code. AI will be used to create code based on a deficit of clear understanding, and we will end up with a hell of a lot of garbage code. I foresee industry demand for programmers skyrocketing in the future, as companies scramble to unfuck the mountains of shit code they lash up over the coming years. It's just a new age of copy-paste coders.

  • woah 2 days ago

    LLMs excel at tasks with very clear instructions and parameters. Porting from one language to another is something that is one step away from being done by a compiler. Another place that I've used them is for initial scaffolding of React components.

  • lukan 2 days ago

    "I hope we can put to rest the argument that LLMs are only marginally useful in coding"

    I more often heard the argument that they are not useful for them. I agree. If an LLM were trained on my codebase and the exact libraries and APIs I use, I would probably use them daily. But currently they still make too many mistakes and mess up different APIs, for example, so they're not useful to me, except for small experiments.

    But if I could train DeepSeek on my codebase for a reasonable amount (and they seem to have improved on the training?), running it locally on my workstation, then I am likely in as well.

    • sureglymop 2 days ago

      I am working on something even deeper. I have been working on a platform for personal data collection. Basically a server and an agent on your devices that records keystrokes, websites visited, active windows etc.

      The idea is that I gather this data now and it may become useful in the future. Imagine getting a "helper AI" that still keeps your essence, opinions and behavior. That's what I'm hoping for with this.

      • svilen_dobrev a day ago

        eh, a hint. I was digging around something in this vein a long time ago - more like collecting one's notions, not exact low-level actions - but apart from it being impossible back then, I dropped it for this simple reason: if you build such a thing, it will know much more about you than you do. And that, in somebody else's hands... identity theft would seem like a walk in the park.

        • sureglymop a day ago

          For sure, thank you for that hint. One of the most important things to consider is that something like this can't be misused on someone else, e.g. as a surveillance tool.

          I should have clarified, I'm only building this for myself and my own use, there are no plans to take it further than that. Basically, I am trying to learn while building something that satisfies my own needs.

      • lukan a day ago

        I am not sure if this was sarcasm, but I believe big data was already yesterday?

        • sureglymop a day ago

          Not sarcasm. This is more a reaction to big data. Here's an analogy: Imagine cloud providers like iCloud, Google Drive, OneDrive etc. As a reaction to those, Owncloud and Nextcloud emerged for personal (well, also business) use.

          My idea with this is inspired by that. It's just for personal use and to address my own needs.

    • Taylor_OD 2 days ago

      We are getting closer and closer to that. For a while LLM assistants were not all that useful on larger projects because they had limited context. That context has increased a lot over the last 6 months. Some tools will even analyze your entire codebase and use that in responses.

      It is frustrating that any smaller tool or API seems to stump LLMs currently, but it seems like context is the main thing that is missing, and that is increasing more and more.

      • lukan 2 days ago

        I have not kept up, can you recommend something?

        • simonw 2 days ago

          My review of 2024 is a good place to catch up on what's changed in the past 12 months: https://simonwillison.net/2024/Dec/31/llms-in-2024/

          • kragen 2 days ago

            That post is the best summary I've seen of what happened in LLMs last year, but what's crazy is that it feels like you wrote it so long ago, and it's only been four weeks! So much has changed since then!

            • lukan 2 days ago

              Do you mean mainly DeepSeek, or did I miss something big?

              • kragen 2 days ago

                Mainly DeepSeek, but also the fallout: a trillion-dollar drop in US stock markets, the new vaporware Qwen that beats DeepSeek, the apparent discrediting of US export controls, OpenAI Operator, etc.

    • rane 2 days ago

      The idea is that you give the libraries and APIs as context with your prompt.

      • r00fus 2 days ago

        There's a fairly low ceiling for max context tokens no matter the size of the model. Your hobby/small codebase may work, but for large codebases, you will need to do RAG and currently it's not perfect at absorbing the codebase and being able to answer questions on it.
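
        To make "doing RAG over a codebase" concrete, here's a deliberately naive sketch (keyword overlap instead of embeddings; real setups chunk files and use a vector index):

          from pathlib import Path

          def retrieve(question: str, repo: str = ".", top_k: int = 3) -> str:
              """Rank source files by crude keyword overlap and return the best few."""
              words = set(question.lower().split())
              scored = []
              for path in Path(repo).rglob("*.py"):
                  text = path.read_text(errors="ignore")
                  score = sum(text.lower().count(w) for w in words)
                  scored.append((score, path, text))
              scored.sort(key=lambda t: t[0], reverse=True)
              return "\n\n".join(f"# {p}\n{t[:2000]}" for _, p, t in scored[:top_k])

          # Stuff only the retrieved snippets (not the whole repo) into the prompt.
          question = "Where is the retry logic for uploads implemented?"
          prompt = f"{retrieve(question)}\n\nQuestion: {question}"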

      • lukan 2 days ago

        Thank you. But that doesn't work for me.

        If you mean just the name of the version in the prompt? No way.

        If you mean all the library and my code in the context window?

        Way too small.

        • simonw 2 days ago

          Give it examples of the library being used in the way you need.

          Here's an example transcript where I did that: https://gist.github.com/simonw/6a9f077bf8db616e44893a24ae1d3...
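
          A minimal sketch of that approach (the file paths are placeholders; send the resulting prompt to whichever model/client you use):

            from pathlib import Path

            # Build a prompt that pastes in real usage examples of the library,
            # so the model imitates the exact API (and version) you actually use.
            example_files = ["examples/basic_usage.py", "examples/streaming.py"]  # placeholders
            examples = "\n\n".join(Path(p).read_text() for p in example_files)

            task = ("Write a function that uploads a file with retries, "
                    "using the same client API shown in the examples.")

            prompt = (
                "Here are examples of the library being used correctly:\n\n"
                f"{examples}\n\n"
                f"Task: {task}"
            )
            # Send `prompt` to whatever model/client you prefer.
            print(prompt)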

          • lukan a day ago

            Thank you, I experimented in that direction as well.

            But for my actual codebase, which is sadly not 100% clean code, it would require lots and lots of work to give it examples with enough of the right context to work well enough.

            While working, I am jumping a lot between contexts and files. Where an LLM will hopefully be helpful one day is in refactoring it all. But currently I would need to spend more time setting up context than solving it myself.

            With limited scope, like in your example, I do use LLMs regularly.

        • kragen 2 days ago

          Maybe the LLM could issue queries to fetch parts of your codebase as it needs to look at them, using something like GDB or cscope.
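
          A rough sketch of that idea, using plain grep for the lookups (the "LOOKUP" convention and the surrounding model loop are hypothetical, not any existing tool's protocol):

            import subprocess

            def lookup_symbol(symbol: str, repo: str = ".") -> str:
                """Return grep hits for a symbol, truncated to keep the context small."""
                out = subprocess.run(
                    ["grep", "-rn", "--include=*.py", symbol, repo],
                    capture_output=True, text=True,
                )
                return out.stdout[:4000]

            # The idea: instead of pasting the whole repo into the prompt, the model
            # answers with requests like "LOOKUP parse_config"; a harness runs the
            # lookup and appends the result to the conversation before re-prompting.
            print(lookup_symbol("parse_config"))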

        • smallerfish 2 days ago

          Play around with projects in Claude for an hour. You'll see.

        • rane 2 days ago

          Not _all_ the code. Just the relevant parts.

  • submeta 2 days ago

    The dev jobs won‘t go away, but they will change. Devs will be more and more like requirements engineers who need to understand the problem and then write prompts with the proper context so that the LLM can produce valuable and working code. And the next level will be to prompt LLMs to generate prompts for LLMs to produce code and solutions.

    But already I hire fewer and fewer developers for smaller tasks. The things that I‘d assign to a dev in Ukraine: explore an idea, do a data transformation, make a UI for an internal company tool. I can do these things quicker with an LLM than by trying to find a dev and explain the task.

    • WXLCKNO 2 days ago

      I think what you're describing is going to be a very short transitional period.

      Once current AI gets good enough, the people micromanaging parts of it will do more to hinder the process than to help it.

      One person setting the objectives and the AI handling literally everything else including brainstorming issues etc, is going to be all that's needed.

      • yoyohello13 2 days ago

        > One person setting the objectives and the AI handling literally everything else including brainstorming issues etc, is going to be all that's needed.

        A person just setting the prompt and letting the AI do all the work is not adding any additional value. Any other person can come in and perform the exact same task.

        The only way to actually provide differentiation in this scenario is to either build your own models, or micromanage the outputs.

      • esafak 2 days ago

        What are you going to do when the output is wrong? You're not expecting it to always be right, are you?

        • rybosworld 2 days ago

          I said this in another comment, but look at the leading chess engines. They are already so far above human level of play that having a human override the engine's choice will nearly always lead to a worse position.

          > You're not expecting it to always be right, are you?

          I think another thing that gets lost in these conversations is that humans already produce things that are "wrong". That's what bugs are. AI will also sometimes create things that have bugs and that's fine so long as they do so at a rate lower than human software developers.

          We already don't expect humans to write absolutely perfect software so it's unreasonable to expect that AI will do so.

        • hatefulmoron 2 days ago

          I don't expect any code to be right the first time. I would imagine if it's intelligent enough to ask the right questions, research, and write an implementation, it's intelligent enough to do some debugging.

  • headcanon 2 days ago

    Agreed, though to your point I think we'll end up seeing more induced demand long-term

    - This will enable more software to be built and maintained by the same or fewer people (initially). Things that we wouldn't previously bother to do are now possible.

    - More software means more problems (not just LLM-generated bugs which can be handled by test suites and canary deploys, but overall features and domains of what software does)

    - This means skilled SWEs will still be in demand, but we need to figure out how to leverage them better.

    - Many codebases will be managed almost entirely by agents, effectively turning it into the new "build target". This means we need to build more tooling to manage these agents and keep them aligned on the goal, which will be a related but new discipline.

    SWEs would need to evolve skillsets but wasn't that always the deal?

  • jasonthorsness 2 days ago

    I think quality is going to go up - I have so much code I wish I could go back and optimize for better performance, or add more comprehensive tests for, and LLMs are getting great at both of those as they work really well off of things that already exist. There has never been enough time/resources to apply towards even the current software demand, let alone future needs.

  • nh2 2 days ago

    Challenge: I would really like somebody that has experience in LLM based coding tools to try and fix gnome-terminal:

    https://news.ycombinator.com/item?id=42814509

    • smokel 2 days ago

      I really like this idea.

      However, it also highlights a key problem that LLMs don’t solve: while they’re great at generating code, that’s only a small part of real-world software development. Setting up a GitHub account, establishing credibility within a community, and handling PR feedback all require significant effort.

      In my view, lowering the barriers to open-source participation could have a bigger impact than these AI models alone. Some software already gathers telemetry and allows sharing bug reports, but why not allow the system to drop down to a debugger in an IDE? And why can’t code be shared as easily as in Google Docs, rather than relying on text-based files and Git?

      Even if someone has the skills to fix bugs, the learning curve for compilers, build tools, and Git often dilutes their motivation to contribute anything.

  • karmasimida 2 days ago

    I am mixed on this point.

    I 100% agree with you our trade is changed forever.

    On the other hand, I am writing like 1000+ LOC daily without much compromise on quality or my mental health, and the dread of writing code that is necessary but feels like a chore is no longer there. The boost in output is incredible.

  • nine_zeros 2 days ago

    > Our trade has changed forever, and there's no going back. When companies claim that AI will replace developers, it isn't entirely bluster. Jobs are going to be lost unless there's somehow a demand for more applications

    This is a key insight - the trade has changed.

    For a long time, hoarding talent - people who could conceive and implement such PRs - was a competitive advantage. It no longer is, because companies can hire fewer, more mediocre devs and get similar outcomes.

    But at the same time, these companies have lost their technological moat. The people were the biggest moat. The hoarding of people was the reason why SV could stay ahead of other concentrated geographies. This is why SV companies grew larger and larger.

    But now, anyone anywhere can produce anything and literally demolish any competitive advantage of large companies. As an example, literally a single Deepseek release yesterday destroyed large market cap companies.

    It means that the future world is likely to have a large number of geographically distributed developers, always competing, and the large companies will have to shed market cap because their customers will be distributed among this competition.

    It's not going to be pleasant. Life and work will change but it is not merely loss of jobs but it is going to be loss of the large corporation paradigm.

    • happyopossum 2 days ago

      > literally a single Deepseek release yesterday destroyed large market cap companies

      Nobody was “destroyed” - a handful of companies had their stock price drop, a couple had big drops, but most of those stocks are up today, showing that the market is reactionary.

      • nine_zeros 2 days ago

        Alright, if this is more palatable - let's just say market caps will decline because of small code updates made anywhere in the world.

        The point still is: Software/engineering is no longer the moat creator.

        • pertymcpert 2 days ago

          You completely misunderstood the reason for the stock price drop. It was because of the DeepSeek MoE model's compute efficiency which vastly reduced the compute requirements needed to achieve a certain level of performance.

          Notice how Apple and Meta stocks went up last 2 days?

          • nine_zeros 2 days ago

            You are misunderstanding my point. It is because anyone with a non-software moat will likely be able to leverage the benefits of AI.

            Apple has a non-software moat: Their devices.

            Meta has a non-software moat: their sticky users.

            So does Microsoft, and Google to an extent with their non-software moat.

            But how did they build the moat in the first place? With software that only they could develop, at a pace that only they could execute, all because of the people they could hoard.

            The companies of the future can disrupt all of them (maybe not apple) very quickly by just developing the same things as say Meta and "at the same quality" but for cheaper. The engineers moat is gone. The only moat meta has is network effects. That's one less barrier for a competing company to deal with.

    • djmips 2 days ago

      You write this as if DeepSeek's R1 was conceived and written by AI itself.

      Do you have a link to that?

      • nine_zeros 2 days ago

        Of course R1 wasn't written by AI. But the point is that in the past, such high quality software could only be written in a concentrated location - SV - because of computing resources and people who could use those computing resources.

        Then in the 00s, the computing resources became widely available. The bottleneck was the people who could build interesting things. Imagine a third world country with access to AWS but no access to developers who could build something meaningful.

        With these models, now these geographically distributed companies can build similarly high quality stuff.

        R1 IS the example of something that previously only could be built in the bowels of large SV corporations.

  • 20k 2 days ago

    Eh, it performed a 1:1 conversion of ARM NEON to WASM SIMD, which with the greatest will in the world is pretty trivial work. It's something that ML is good at, because it's the same problem area as "translate this from English to French", but more mechanistic.

    This is a task that would likely have taken as long to write by hand as the AI took to do it, given how long the actual task took to execute. 98% of the work is find and replace
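
    As a rough illustration of how mechanical the mapping is, here's a table-driven rewrite sketch in Python (the intrinsic names are real; the table is deliberately tiny and incomplete, nothing like what the actual PR covers):

      import re

      # Tiny, illustrative subset of the ARM NEON -> WASM SIMD128 intrinsic mapping.
      NEON_TO_WASM = {
          "float32x4_t": "v128_t",
          "vld1q_f32":   "wasm_v128_load",
          "vst1q_f32":   "wasm_v128_store",
          "vaddq_f32":   "wasm_f32x4_add",
          "vmulq_f32":   "wasm_f32x4_mul",
      }

      def translate(line: str) -> str:
          # Swap every known NEON identifier for its WASM SIMD counterpart.
          pattern = re.compile("|".join(re.escape(k) for k in NEON_TO_WASM))
          return pattern.sub(lambda m: NEON_TO_WASM[m.group(0)], line)

      neon = "float32x4_t acc = vmulq_f32(vld1q_f32(a + i), vld1q_f32(b + i));"
      print(translate(neon))
      # -> v128_t acc = wasm_f32x4_mul(wasm_v128_load(a + i), wasm_v128_load(b + i));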

    Don't get me wrong - this kind of thing is useful and cool, but you're mixing up the easy coding donkey work with the stuff that takes up time

    If you look at the actual prompt engineering part, it's clear that this prompting produced extensively wrong results as well, which is tricky. Because it wasn't produced by a human, it requires extensive edge-case testing and review to make sure the AI didn't screw anything up. If you have the knowledge to validate the output, it would have been quicker to write it by hand instead of reverse engineering the logic by hand. It's bumping the work off from writing it by hand to the reviewers, who now have to check your ML code because you didn't want to put in the work by hand.

    So overall - while it's extremely cool that it was able to do this, it has strong downsides for practical projects as well.

    • WhitneyLand 2 days ago

      Every time AI achieves something new/productive/interesting, cue the apologists who chime in to say “well yeah but that really just decomposes into this stuff so it doesn’t mean much”.

      I don’t get why people don’t understand that everything decomposes into other things.

      You can draw the line for when AI will truly blow your mind anywhere you want, the point is the dominoes keep falling relentlessly and there’s no end in sight.

      • faizshah 2 days ago

        The argument has never changed; it has always been the same.

        LLMs do not think; they do not perform logic; they approximate thought. The reason CoT works is because of the main feature of LLMs: they are extremely good at picking reasonable next tokens based on the context.

        LLM are good and always have been good at three types of tasks:

        - Closed form problems where the answer is in the prompt (CoT, Prompt Engineering, RAG)

        - Recall from the training set as the Parameter space increases (15B -> 70B -> almost 1T now)

        - Generalization and Zero shot tasks as a result of the first two (this is also what causes hallucinations which is a feature not a bug, we want the LLM to imitate thought not be a Q&A expert system from 1990)

        If you keep being fooled into thinking LLMs are AGI after every impressive benchmark, while everyone keeps telling you that in practice LLMs are not good at tasks that are poorly defined, require niche knowledge, or require a special mental model, that is on you.

        I use LLMs every day. They speed up many tasks that would take 5-15 mins down to 10-120 seconds (worst case, with re-prompts). Many times my tasks take longer than if I had done them myself, because it's not my work, I'm just copying it. But overall I am more productive because of LLMs.

        Does an LLM speeding up your work mean that LLMs can replace humans?

        Personally I still don't think LLMs can replace humans at the same level of quality, because they are imitating thought, not actually thinking. Now the question among the corporate overlords is whether you will reduce operating costs by XX% per year (wages) while reducing the quality of service for customers. The last 50 years have shown us the answer…

      • simonw 2 days ago

        This is called the AI effect - where the goalposts are moved every time an AI system demonstrates a new ability. It's been going on for decades. https://en.wikipedia.org/wiki/AI_effect

        • 20k 2 days ago

          Aka people have been consistently calling out the AI hype as being excessive for decades, despite a weird push by the marketing segments of the programming community to declare everything as being AGI. The current technology is better and has more applications, yes. For certain fields its very exciting. For others its not

          The idea that Deep Blue is in any way a general artificial intelligence is absurd. If you'd believed AI researchers' hype 20 years ago, we'd have everything fully automated by now and the first AGI would have been just around the corner. Despite the current hype, ChatGPT and co are barely functional at most coding tasks, and are excessively poor at even pretty basic reasoning tasks.

          I would love for AI to be good. But every time I've given it a fair shake to see if it'll improve my productivity, it's shown pretty profoundly that it's useless for anything I want to use it for.

        • ceejayoz 2 days ago

          That goes both ways, though. Every new ability is "the big one" that hints at AGI just around the corner.

      • lukan 2 days ago

        "You can draw the line for when AI will truly blow your mind anywhere you want, the point is the dominoes keep falling relentlessly and there’s no end in sight"

        I draw the line, when the LLM will be able to help me with a novel problem.

        It is impressive how much knowledge was encoded into them, but I see no line from here to AGI, which would be the end here.

        • esafak 2 days ago

          Can you give an example of a novel problem they can not help you solve?

      • Capricorn2481 2 days ago

        > cue the apologists

        How are you defining apologists here? Anti-AI apologists? Human apologists? That's not a word you can just sprinkle on opposing views to make them sound bad.

        • WhitneyLand 2 days ago

          Yeah, human apologists basically.

          Thanks to Simon for pointing out my point is encapsulated by the AI effect, which also offers an explanation:

          "people subconsciously are trying to preserve for themselves some special role in the universe…By discounting artificial intelligence people can continue to feel unique and special.”

          • Capricorn2481 a day ago

            > human apologists

            Those monsters

            > Thanks to Simon for pointing out my point is encapsulated by the AI effect

            And someone else pointed out that goes both ways. Every new AI article is evidence of AGI around the corner. I am open to AI being better in the future but it's useless for the work I do right now.

      • bigpingo 2 days ago

        I have always had the same line:

        AI will blow my mind when it solves an unsolved mathematical/physics/scientific problem, i.e: "AI, give me a proof for (or against) the Riemann hypothesis"

        • simonw 2 days ago
          • svieira 2 days ago

            Actually, it happened _long_ before that - 2018 was when I became aware of this technique, but I'm sure there's previous art: https://nullprogram.com/blog/2018/07/31/ (Prospecting for Hash Functions for those who already know).

            That said, this is really brute forcing, not what the OP is asking for, which is providing a novel proof as the response to a prompt (this is instead providing the novel proof as one of thousands of responses, each of which could be graded by a function).

      • 20k 2 days ago

        The thing is, that's not true at all. AI is great for some tasks, and poor for other tasks. That's the reason to break it down like this, because people are trying to explain where AI will and won't revolutionise things, instead of following along with the already-popping AI bubble uncritically

        For example: AIs smash translation. They won't ever beat out humans, but as an automated solution? They rock. Natural language processing in general is great. If you want to smush in a large amount of text, and smush out a large amount of other text that's 98% equivalent but in a different structure, that's what AI is good for. Same for audio or picture manipulation. It works because it has tonnes of training data to match your input against.

        What AI cannot do, and will never be able to do, is take in a small amount of text (i.e. a prompt) and generate a large novel output with 100% accuracy. It simply doesn't have the training data to do this. AI excels in tasks where it is given large amounts of context and asked to perform a mechanistic operation, because it's a tool designed to extract context and perform conversions based on that context, thanks to its large amounts of training data. This is why in this article the author was able to get this to work: they could paste in a bunch of examples of similar mechanical conversions, and ask the AI to repeat the same process. It has trained on these kinds of conversions, so it works reasonably well.

        It's great at this, because it's not a novel problem, and you're giving it its exact high-quality use case: take a large amount of text in, and perform some kind of structural conversion on it.

        Where AI fails is when it's asked to invent whole-cloth solutions to new problems. This is where it's very bad. So for example, if you ask an AI tool to solve your business problem via code, it's going to suck. Because unless your business problem is something where there are literally 1000s of examples of how to solve it, the AI simply lacks the training data to do what you ask of it; it'll make gibberish.

        It isn't about the raw power of the AI; it's that it's inherently good for solving certain kinds of problems vs other kinds of problems. It can't be solved with more training. The OP's problem is a decent use case for it. Most coding problems aren't. That's not to say it isn't useful - people have already been successfully using these tools for tonnes of stuff - but it's important to point out that it only did so well because of the specific nature of the use case.

        It's become clear that AI requires someone with skill equivalent to the original use case to manage its output if 100% accuracy is required, which means that it can only ever function as an assistant for coders. Again, that's not to say it isn't wildly cool; it's just acknowledging what it's actually useful for instead of 'waiting to have my mind blown'.

        • chunky1994 2 days ago

          The difference, though, is there isn't a whole lot of "whole cloth novel solutions" being written in software today so much as "write me this CRUD app to do ABC", which current generations of models are exceedingly good at.

          There are probably 10% of truly novel problems out there, the rest are just already solved problems with slightly different constraints of resources ($), quality (read: reliability) and time. If LLMs get good enough at generating a field of solutions that minimize those three for any given problem, it will naturally tend to change the nature of most software being written today.

          • Capricorn2481 2 days ago

            I think there's a gap of problems between CRUD and novel. I imagine novel to be very difficult, unsolved problems that would take some of the best in the industry to figure out. CRUD problems are really basic reading/writing data to a database with occasional business logic.

            But there's also bespoke problems. They aren't quite novel, yet are complicated and require a lot of inside knowledge on business edge cases that aren't possible to sum up in a word document. Having worked with a lot of companies, I can tell you most businesses literally cannot sum up their requirements, and I'm usually teaching them how their business works. These bespoke problems also have big implications on how the app is deployed and run, which is a whole different thing.

            Then you have LLMs, which seem allergic to requirements. If you tell an LLM "make this app, but don't do these 4 things," it's very different from saying "don't do these 12 things." It's more likely to hallucinate, and when you tell it to please remember requirement #3, it forgets requirement #7.

            Well, my job is doing things with lots of constraints. And until I can get AI to read those things without hallucinating, it won't be helpful to me.

        • svachalek 2 days ago

          You need to substitute "AI" with "LLMs" or "current transformer architecture" or something. AI means something completely new every few years so speaking of what AI can't do or can never do doesn't make any sense.

        • faizshah 2 days ago

          I just wrote up a very similar comment. It's really nice to see that there are other people who understand the limits of LLMs in this hype cycle.

          Like all the people surprised by DeepSeek, when it has been clear for the last 2 years that there is no moat in foundation models and all the value is in 1) high-quality data, which becomes more valuable as the internet fills with AI junk, and 2) building the UX on top that will make specific tasks faster.

    • UncleEntity 2 days ago

      IDK, I was playing with Claude yesterday/this morning and before I hit the free tier context limit it managed to create a speech-to-phoneme VQ-VAE contraption with a sliding window for longer audio clips and some sort of "attention to capture relationships between neighboring windows" that I don't quite understand. That last part was due to a suggestion it provided where I was like "umm, ok..."

      Seems pretty useful to me, where I've read a bunch of papers on different variational autoencoders but never spent the time to learn the torch API or how to set up a project on the google.

      In fact, it was so useful I was looking into paying for a subscription as I have a bunch of half-finished projects that could use some love.

  • chrisguilbeau 2 days ago

    I'm a developer that primarily uses gh copilot for python dev. I find it pretty useful as an intelligent auto-completer that understands our project's style, and unusual decorators we use.

    What tools would you tell a copilot dev to try? For example, I have a $20/mo ChatGPT account and asking it to write code or even fix things hasn't worked very well. What am I missing?

    • byteknight 2 days ago

      While I don't know your scenario, as an avid user of both GPT and Claude I would recommend moving away from Google-style search queries and beginning to converse. The more you give the LLM, the closer you'll get to what you want.

  • fauigerzigerk 2 days ago

    A long time ago, I held the grandiose title of software architect. My job was to describe in a mix of diagrams, natural language and method signatures what developers were supposed to do.

    The back and forth was agonising. They were all competent software engineers but communicating with them was often far more work than just writing the damn code myself.

    So yes, I do believe that our trade has changed forever. But the fact that some of our coworkers will be AIs doesn't mean that communicating with them is suddenly free. Communication comes with costs (and I don't mean tokens). That won't change.

    If you know your stuff really well, i.e. you work on a familiar codebase using a familiar toolset, the shortest path from your intentions to finished code will often not include anyone else - no humans and no AI either.

    In my opinion, "LLMs are only marginally useful in coding" is not true in general, but it could well be true for a specific person and a specific coding task.

  • cjbgkagh 2 days ago

    Who would the new applications be for? I figure that it'll be far easier to build apps for use by LLMs than to build apps for people to use. I don't think there will be this large increase in induced demand; the whole world just got a lot more efficient, and that's probably a bad thing for the average person.

    • 6510 2 days ago

      Oh right, we will have a B2B B2C B2L L2B L2C and ultimately the L2L market.

    • fragmede 2 days ago

      Take some process that you, or someone you know, does right now that involves spreadsheets and copy-pasting between various apps. Hiring a software engineer to build an app so it's just a [do-it] button previously didn't make sense because software engineer time was too expensive. Now that app can be made, so the HR or whatever person doesn't need to waste their time on automatable tasks.
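
      A sketch of the kind of glue that becomes cheap to produce (the file names and columns are made up):

        import csv

        # Join two exported spreadsheets on a shared "employee_id" column and
        # write a combined report - the sort of copy-paste job a [do-it]
        # button could replace.
        with open("hours.csv", newline="") as f:
            hours = {row["employee_id"]: row for row in csv.DictReader(f)}

        with open("payroll.csv", newline="") as f, open("report.csv", "w", newline="") as out:
            reader = csv.DictReader(f)
            writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["hours_worked"])
            writer.writeheader()
            for row in reader:
                row["hours_worked"] = hours.get(row["employee_id"], {}).get("hours", "")
                writer.writerow(row)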

      • cjbgkagh 2 days ago

        I'm pretty sure a LLM can be taught to press a button

      • 6510 2 days ago

        The thing that has me most inspired is that one will finally get to ask the questions that seemed strange to ask before. Like: 1 in 40 times, when I press the button, nothing happens for 10 seconds and I don't know if I've pressed the button properly.

      • yobbo 2 days ago

        If their time was actually wasted, paying a few or several thousand dollars for a tool would have been profitable a long time ago.

        Usually, there are hidden payoffs that motivate things that seem like waste.

        • simonw 2 days ago

          The people who are spending time manually doing a task that could be handled by a program are usually the exact same people who don't have the experience (or authority) to be able to say "this is a thing that could be automated with a tool if we paid a few thousand dollars to develop it".

          Hiring someone to remodel a bathroom is hard enough, now try hiring a contract software engineer, especially when you don't have budget authority!

          That said, I heard about a fire chief last year who had to spend two days manually copying and pasting from one CRM to another. I wish I could help people like that know when to pay someone to write a script!

          I imagine even in that role, figuring out how to hire someone to solve a problem would still take longer than manually crunching through it themselves.

  • gmt2027 2 days ago

    If AI increases the productivity of a single engineer between 10-100x over the next decade, there will be a seismic shift in the industry and the tech giants will not walk away unscathed.

    There are coordination costs to organising large amounts of labour. Costs that scale non-linearly as massive inefficiencies are introduced. This ability to scale, provide capital and defer profitability is a moat for big tech and the silicon valley model.

    If a team of 10 engineers become as productive as a team of 100-1000 today, they will get serious leverage to build products and start companies in domains and niches that are not currently profitable because the middle managers, C-Suite, offices and lawyers are expensive coordination overhead. It is also easier to assemble a team of 10 exceptional and motivated partners than 1000 employees and managers.

    Another way to think about it is: what happens when every engineer can marshal the AI equivalent of $10-100m of labour?

    My optimistic take is that the profession will reach maturity when we become aware of the shift in the balance of power. There will be more solo engineers and we will see the emergence of software practices like the ones doctors, lawyers and accountants operate.

    • kragen 2 days ago

      I'm tempted by this vision, though that in itself makes me suspicious that I'm indulging in wishful thinking. Also lutusp wrote a popular article promoting it about 45 years ago, predicting that no companies like today's Microsoft would come to exist.

      A thing to point out is that management is itself a skill, and a difficult one, one where some organizations are more institutionally competent than others. It's reasonable to think of large-organization management as the core competency of surviving large organizations. Possibly the hypothetical atomizing force you describe will create an environment where they are poorly adapted for continuing survival.

    • darkwater 2 days ago

      This is a really interesting take that I don't see often in the wild. Actually, it's the first time I read someone saying this. But I think you are definitely onto something, especially if costs of AI are going to lower faster than expected even a few weeks ago.

    • AznHisoka 2 days ago

      To play devils advocate, the main obstacle in launching a product doesn't involve the actual development/coding. Unless you're building something in hard-tech, it's relatively easy to build the run of the mill software.

      The obstacles are in marketing, selling it, building a brand/reputation, integrating it with lots of 3rd party vendors, and supporting it.

      So yes, you can build your own Salesforce, or your own Adobe Photoshop with a one-man crew much faster and easier. But that doesn't mean you, as an engineer can now build your own business selling it to companies who don't know anything about you.

    • svilen_dobrev 2 days ago

      A (tile-placing) guy who was rebuilding my bathrooms told this story:

      When he was greener, he happened to work with some old fart... who managed to work 10x faster than others, with this trick: put all the tiles on the wall with a diluted cement-glue very quickly, then moving one tile forces most other tiles around it to move as well... so he managed to line up all the tiles in a very short time.

      As I never had the luxury of a decent budget, since long ago I have been doing various meta-programming things, then meta-meta-programming... up to the extent of, say, 2 people building, managing and enjoying a codebase of 100KLOC (Python) + 100KLOC JS... roughly 30% generated statically and an unknown percentage generated at runtime - without too much fuss or overwork.

      But it seems that this road has been a dead end... for decades. Fewer and fewer people use meta-programming; it needs too deep an understanding. Everyone just adds yet another (2-year "senior") junior/wanna-be to copy-paste yet another CRUD.

      So maybe the number of wanna-bees will go down. Or "senior" would start meaning something.. again. Or idiotically-numbing-stoopid requirements will stop appearing..

      • kragen 2 days ago

        When I meta-meta-program, I sometimes waste a lot of time debugging.

    • WXLCKNO 2 days ago

      Like darkwater's comment, this is my first time seeing this take and I like it a lot.

      I hate the idea of building a business to hundreds/thousands of employees, I love startups and small but highly profitable businesses.

      Having productivity be unleashed in this way with a small team of people I trust would be amazing.

    • Aloisius 2 days ago

      As long as the output of AI is not copyrightable, there will be demand for human engineers.

      After all, if your codebase is largely written by AI, it becomes entirely legal to copy it and publish it online, and sell competing clones. That's fine for open source, but not so fine for a whole lot of closed source.

  • simlevesque 2 days ago

    I got incredible results asking AIs for SQL queries. I just enter my data and what I want the output to look like. Then I ask it to provide 10 different versions that might be faster. I test them all, tell it which is fastest, and then ask it to make variations on this path. Then I ask it to add comments to the fastest version. I verify the query, do some more tests, and I'm good to go. I understand SQL pretty well, but trying to make 10 different versions of one query would've taken me at least an hour.
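
    A minimal harness for that test-the-candidates loop (assuming SQLite and a made-up `orders` table; the queries stand in for whatever variations the model suggests):

      import sqlite3, time

      # Hypothetical candidate rewrites of the same query, pasted in from the LLM.
      candidates = [
          "SELECT country, COUNT(*) FROM orders GROUP BY country",
          "SELECT country, COUNT(*) FROM orders WHERE country IS NOT NULL GROUP BY country",
      ]

      conn = sqlite3.connect("shop.db")
      for sql in candidates:
          start = time.perf_counter()
          rows = conn.execute(sql).fetchall()
          print(f"{time.perf_counter() - start:.4f}s  ({len(rows)} rows)  {sql}")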

  • nkozyra 2 days ago

    > it isn't entirely bluster

    "Development" is effectively translating abstractions of an intended operation to machine language.

    What I find kind of funny about the current state is we're using large language models to, like, spit out React or Python code. This use case is obviously an optimization to WASM, so a little closer to the metal, but at what point do programs (effectively suites of operations) just cut out the middleman entirely?

    • dboreham 2 days ago

      I've wondered about this too. The LLM could just write machine code. But now a human can't easily review it. But perhaps TDD makes that ok. But now the tests need to be written in a human readable language so they can be checked. Or do they? And if the LLM is always right why does the code need to be tested?

      • kragen 2 days ago

        The LLM might be terrible at writing machine code directly. The kinds of mistakes I see GPT-4 making in Python, PostScript, or JS would be a much bigger problem in machine code. It "gets confused" and "makes mistakes" in ways very similar to humans. I haven't had a chance to try DeepSeek R1 yet.

      • nkozyra 2 days ago

        At a certain point I don't see why a human needs to be in the loop at all. But I suppose that's the most dystopian part of it all.

        • kragen 2 days ago

          Maybe the human has the money.

  • nuancebydefault a day ago

    AI will only ever be able to develop what it is asked/prompted for. The question is often ill-formed, resulting in an app that does not do what you want. So the prompt needs to be updated, the result needs to be evaluated, and tweaks need to be made to the code, with or without the help of AI.

    In fact, seen from a distance, the software development pattern in AI times stays the same as it was pre-AI, pre-SO, pre-IDE, as well as pre-internet.

    Just to say, sw developers will still be sw developers.

  • redcobra762 2 days ago

    When tools increase a worker's efficiency, it's rare that the job is lost. It's much more common that the demand for that job changes to take advantage of the productivity growth.

    This is why the concerns from Keynes and Russell about people having nothing to do as machines automated away more work ended up being unfounded.

    We fill the time... with more work.

    And workers that can't use these tools to increase their productivity will need to be retrained or moved out of the field. That is a genuine concern, but this friction is literally called the "natural rate of unemployment" and happens all the time. The only surprise is we expected knowledge work to be more inoculated from this than it turns out to be.

  • rozap 2 days ago

    Broadly agree. Whether or not it is useful isn't really an interesting discussion, because it so clearly is useful. The more interesting question is what it does to supply and demand. If the past is any indication, I think we've seen that lowering the barrier to getting software shipped and out the door (whether through higher-level languages or better tooling) has only made demand greater. Maybe this time it's different because it's such a leap vs an incremental gain? I don't know. The cynical part of me thinks that software always begets more software, and systems just become ever more complex. That would suggest that our jobs are safe. But again, I don't say that with confidence.

    • simonw 2 days ago

      > If the past is any indication, I think we've seen that lowering the barrier to getting software shipped and out the door (whether it's higher level languages, better tooling) has only made demand greater.

      Something I think about a lot is the impact of open source on software development.

      25 years ago any time you wanted to build anything you pretty much had to solve the same problems as everyone else. When I went to university it even had a name - the software reusability crisis. At the time people thought the solution was OOP!

      Open source solved that. For any basic problem you want to solve there are now dozens of well tested free libraries.

      That should have eliminated so many programming jobs. It didn't: it made us more productive and meant we could deliver more value, and demand for programmers went up.

    • oorza 2 days ago

      I don't think it's necessarily any larger of a leap than any of the other big breakthroughs in the space. Does writing safe C++ with an LLM matter more than choosing Rust? Does writing a jQuery-style gMail with an LLM matter more than choosing a declarative UI tool? Does adding an LLM to Java 6 matter more than letting the devs switch to Kotlin?

      Individual developer productivity will be expected to rise. Timelines will shorten. I don't think we've reached Peak Software where the limiting factor on software being written is demand for software, I think the bottlenecks are expense and time. AI tools can decrease both of those, which _should_ increase demand. You might be expected to spend a month outputting a project that would previously have taken four people that month, but I think we'll have more than enough demand increase to cover the difference. How many business models in the last twenty years that weren't viable would've been if the engineering department could have floated the company to series B with only a half dozen employees?

      What IS larger than before, IMO, is the talent gap we're creating at the top of the industry funnel. Fewer juniors are getting hired than ever before, so as seniors leave the industry due to standard attrition reasons, there are going to be fewer candidates to replace them. If you're currently a software engineer with 10+ YoE, I don't think there's much to worry about - in fact, I'd be surprised if "was a successful Software Engineer before the AI revolution" doesn't become a key resume bullet point in the next several years. I also think that if you're in a position of leadership and have the creativity and leadership to make it work, juniors and mid-level engineers are going to be incredibly cost effective because most middle managers won't have those things. And companies will absolutely succeed or fail on that in the coming years.

  • Myrmornis 2 days ago

    In my experience a lot of it is (d) defaulting to criticizing new things, especially things that are "trendy" or "hot" and (e) not liking to admit that one's own work can partially be done by such a trendy or hot thing.

  • kragen 2 days ago

    It's possible that the previous tools just weren't good enough yet. I play with GPT-4 programming a lot, and it usually takes more work than it would take to write the code myself. I keep playing with it because it's so amazing, but it isn't to the point where it's useful to me in practice for that purpose. (If I were an even worse coder than I am, it would be.) DeepSeek looks like it is.

  • casenmgreen 2 days ago

    I may be wrong, but I think right now, from reading stories of people trying to use AI and having poor experiences, AI is useful and effective for some tasks and not for others, and this is an intrinsic property - it won't get better with bigger models. You need a task which fits well with what AI can do, which is basically auto-complete. If you have a task which does not fit well, it's not going to fly.

    • simonw 2 days ago

      Right: LLMs have a "jagged frontier". They are really good at some things and terrible at other things, but figuring out WHAT those things are is extremely unintuitive.

      You have to spend a lot of time experimenting with them to develop good intuitions for where they make sense to apply.

      I expect the people who think LLMs are useless are people who haven't invested that time yet. This happens a lot, because the AI vendors themselves don't exactly advertise their systems as "they're great at some stuff and terrible at other stuff and here's how to figure that out".

  • plainOldText 2 days ago

    Indeed, our trade has changed forever, and more specifically, we might have to alter our operational workflows in the entire industry as well.

    There are so many potential trajectories going forward for things to turn sour, I don't even know where to start the analysis. The level of sophistication an AI can achieve has no upper bound.

    I think we've had a good run so far. We've been able to produce software in the open with contributions from any human on the planet, trusting it was them who wrote the code, and with the expectation that they also understand it.

    But now things will change. Any developer, irrespective of skill and understanding of the problem and technical domains, can generate sophisticated-looking code.

    Unfortunately, we've reached a level of operational complexity in the software industry, that thanks to AI, could be exploited in a myriad ways going forward. So perhaps we're going to have to aggressively re-adjust our ways.

    • herval 2 days ago

      I don't think trusting that someone wrote the code was ever a good assurance of anything, and I don't see how that changes with AI. There will always be certain _individuals_ who are more reliable than others, not because they handcraft code, but because they follow through with it (make sure it works, fix bugs after release, keep an eye to make sure it worked, etc).

      Yes, AI will enable exponentially more people to write code, but that's not a new phenomenon - bootcamps enabled an order of magnitude more people to become developers. So did higher level languages, IDEs, frameworks, etc. The march of technology has always been about doing more while having to understand less - higher and higher levels of abstraction. Isn't that a good thing?

      • plainOldText 2 days ago

        Until now, the march of technology has taken place through a realm which was somewhat limited or slowed down only by our advancements in the physical and cognitive realities. This has given us ample time to catch up, to adjust.

        The cognitive reality of AI, and more specifically of AI+Humans in the context of a social and globally connected world, is on a higher level of sophistication and can unfold much faster, which in turn might generate entirely unexpected trajectories.

        • herval 2 days ago

          Has it really? What evidence do we have that it's such an insanely exponential advancement?

  • Vegenoid 2 days ago

    > Our trade has changed forever, and there's no going back

    Forever? Hell, it hasn't even existed for a lifetime yet.

    • kragen 2 days ago

      01945 to 02025 is 80 years, longer than human life expectancy at birth. What's your definition of a "lifetime"?

      • Vegenoid a day ago

        The point was not to be very accurate, it was to make the point that it has existed for a very short amount of time on the scale of humanity. Quibbling over whether software engineering started in the 40s or the 50s and whether that is greater or less than an average life expectancy is beside the point.

        • kragen 21 hours ago

          People posting comments without caring whether they are true or false undermines the presumption of good faith that underlies rational discourse. Please stop posting such comments on this site. Instead, only post comments that you have some reason to believe are true.

          • Vegenoid 9 hours ago

            It is not about not caring if the statement is true or false. The statement is neither absolutely true nor absolutely false, because there is no absolute definition of when software engineering started, or what "a lifetime" is. It is about making a statement that communicates information. However, if I must prove that my statement can reasonably be considered "true" in order to prove that it does communicate the short span of time for which software engineering has existed:

            - Avg. life expectancy in USA: 77.5 years

            - 2025 - 77.5 = 1947.5

            - In 1945, Turing published "Proposed Electronic Calculator"

            - The first stored-program computer was built in 1948

            - The term "software engineering" wasn't used until the 1960s

            If you want to define "software engineering" such that it is more than 77.5 years old, that's fine. But saying that software engineering is less than 77.5 years old is clearly a reasonable stance.

            Please stop berating me for a perfectly harmless and reasonably accurate statement. If you're going to berate me for anything, it should be for its brevity and lack of discussion-worthy content. But those are posted all the time.

  • thrance 2 days ago

    Did you even look at the generated code? DeepSeek simply rewrote part of the inference code making use of SIMD instructions on wasm. It literally boils down to inserting `if defined __wasm_simd128__` at some places then rewriting the loops to do floating point operations two by two instead of one after the other (which is where the 2X claim comes from). This is very standard and mostly boilerplate.
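
    For readers who haven't written this kind of code, here is a minimal sketch of the pattern being described - guarding a vectorized loop behind the wasm SIMD feature macro. It is illustrative only (a plain f32 dot product, four lanes per step), not the PR's actual code, which works on quantized integer blocks:

      #include <wasm_simd128.h>  /* requires compiling with -msimd128 */

      /* Scalar fallback: one multiply-accumulate per iteration. */
      static float dot_scalar(const float *a, const float *b, int n) {
          float sum = 0.0f;
          for (int i = 0; i < n; ++i) sum += a[i] * b[i];
          return sum;
      }

      #if defined(__wasm_simd128__)
      /* Vectorized version: process 4 floats per iteration. */
      static float dot_simd(const float *a, const float *b, int n) {
          v128_t acc = wasm_f32x4_splat(0.0f);
          int i = 0;
          for (; i + 4 <= n; i += 4) {
              v128_t va = wasm_v128_load(a + i);
              v128_t vb = wasm_v128_load(b + i);
              acc = wasm_f32x4_add(acc, wasm_f32x4_mul(va, vb));
          }
          float sum = wasm_f32x4_extract_lane(acc, 0) + wasm_f32x4_extract_lane(acc, 1)
                    + wasm_f32x4_extract_lane(acc, 2) + wasm_f32x4_extract_lane(acc, 3);
          for (; i < n; ++i) sum += a[i] * b[i];  /* scalar tail */
          return sum;
      }
      #endif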

    Useful, sure, in that it saved some time in this particular case. But most of the AI-generated code I interact with is a hot unmaintainable mess of very verbose code, which I'd argue actually hurts the project in the long term.

    • simonw 2 days ago

      "But most of the AI-generated code I interact with is a hot unmaintainable mess of very verbose code"

      That sounds like you're working with unskilled developers who are landing bad code.

  • unshavedyak 2 days ago

    I’m still just looking for a good workflow where I can stay in my editor and largely focus on code, rather than trying to explain what I want to an LLM.

    I want to stay in Helix and find a workflow that “just works”. Not sure even what that looks like yet

    • WXLCKNO 2 days ago

      Just to clarify, something like Cursor doesn't fit your needs right?

      • unshavedyak 2 days ago

        I've not tried tbh. Most of the workflows i've seen (i know i looked at Cursor, but it's been a while) appear to be to write lengthy descriptions of what you want it to do. As well as struggling with the amount of context you need to give it because context windows are way too small.

        I feel like i want a more intuitive, natural process. Purely for illustration -- because i have no idea what the ideal workflow is -- I'd want something that could allow for large autocomplete without changing much. Maybe a process by which i write a function, args, docstring on the func and then as i write the body autocomplete becomes multiline and very good.

        Something like this could be an extension of the normal autocomplete that most of us know and love. A lack of talking to an AI, and more about just tweaking how you write code to be very metadata rich so AIs have a rich understanding of intent.

        I know there are LLM LSPs which sort of do this. They can make shorter autocompletes that are logical to what you're typing, but i think i'm talking about something larger than that.

        So yea.. i don't know, but i just know i have hated talking to the LLM. Usually it felt like "get out of the way, i can do it faster" sort of thing. I want something to improve how we write code, not an intern that we manage. If that makes sense.

        • stnmtn 2 days ago

          I'd recommend trying out Cursor, I personally find it does more or less what you want out of the box.

          • unshavedyak 2 days ago

            I'll look into it, thanks!

            I'm currently trying to figure out how it works with my editor, though. Ie i don't want to leave my tooling of choice, Helix editor.

  • lenerdenator 2 days ago

    My greatest problem is duplicating the secret sauce of GHCP: it has access to your project and can use it as context.

    Admittedly, I haven't looked too hard, but how could I do that with a model from, say, Ollama and run exclusively on my machine?

    • nkozyra 2 days ago

      Couldn't you load the whole thing into a database or memory and use it as a RAG source? Not sure if that would fully scratch the itch.

  • chefandy 2 days ago

    GH copilot code completion is really the only one I’ve found to be consistently more of a benefit than a time sink. Even with the spiffy code generators using Claude or whatever, I often find myself spending as much time figuring out where the logical problem is as if I had just coded it myself, and you still need to know exactly what needs to be done.

    I’d be interested in seeing how much time they spent debugging the generated code and how long they spent constructing and reconstructing the prompts. I’m not a software developer anymore as my primary career, so if the entire lower half of the software development market went away, cratering wages as it did, it wouldn’t directly affect my professional life. (And with the kind of conceited, gleeful techno-libertarian shit I’ve gotten from the software world at large over the past couple of years as a type of specialized commercial artist, it would be tough to turn that schadenfreude into empathy. But we honestly need to figure out a way to stick together or else we’re speeding towards a less mechanical version of Metropolis.)

  • TheBigSalad 2 days ago

    LLMs are only marginally useful for coding. You have simply chosen to dismiss or 'give up' on that fact. You've chosen what you want to believe in contrast to the reality that we are all experiencing.

    • simonw 2 days ago

      LLMs are incredibly useful for coding, if you learn how to apply them effectively. You have simply chosen to dismiss or 'give up' on that fact.

      • astrobe_ 2 days ago

        I think the reality is that these AI output the "average" of what was in their training set, and people receive it differently depending on if they are below or above this average.

        It's a bit like what happens with "illusion of knowledge" or "illusion of understanding". When one knows the topic, one can correct the output of AI. When one doesn't, one tends to forget it can be inaccurate or plain wrong.

      • TheBigSalad 2 days ago

        They are a useful tool, but not 'incredibly useful'. The simple, repetitive code in this example is what they are good at. It's like 1% of what I do working on products. Writing code isn't even that impressive, the whole job is figuring out exactly what people want.

        • fragmede 2 days ago

          Given that there's no standardized scale of usefulness, is the distinction between "useful" for one person and "incredibly useful" for another really the important thing here, when nothing concrete has been specified? Both of you find it useful. I might go off on a long tangent about how I love my hammer, it's the best, and you'll think I'm ridiculous because it's just a hammer, but at the end of the day, we can both agree that the hammer is doing the job of driving in nails.

  • mjr00 2 days ago

    > I hope we can put to rest the argument that LLMs are only marginally useful in coding - which are often among the top comments on many threads. I suppose these arguments arise from (a) having used only GH copilot which is the worst tool, or (b) not having spent enough time with the tool/llm, or (c) apprehension. I've given up responding to these.

    Look at the code that was changed[0]. It's a single file. From what I can tell, it's almost purely functional with clearly specified inputs and outputs. There's no need to implement half the code, realize the requirements weren't specified properly, and go back and have a conversation with the PM about it. Which is, you know, what developers actually do.

    This is the kind of stuff LLMs are great at, but it's not representative of a typical change request by Java Developer #1753 at Fortune 500 Enterprise Company #271.

    [0] https://github.com/ggerganov/llama.cpp/pull/11453/files

    • simonw 2 days ago

      "Yeah, but LLMs can't handle millions of lines of crufty old Java" is a guaranteed reply any time this topic comes up.

      (That's not to say it isn't a valid argument.)

      Short answer: LLMs are amazingly useful on large codebases, but they are useful in different ways. They aren't going to bang out a new feature perfectly first time, but in the right hands they can dramatically accelerate all sorts of important activities, such as:

      - Understanding code. If code has no documentation, dumping it into an LLM can help a lot.

      - Writing individual functions, classes and modules. You have to be good at software architecture and good at prompting to use them in this way - you take on the role of picking out the tasks that can be done independently of the rest of the code.

      - Writing tests - again, if you have the skill and experience to prompt them in the right way.

      • mjr00 2 days ago

        Yes, LLMs are very useful, when used properly. But the linked change request is not a good example of how they would be used by a typical software developer. The linked pull request is essentially output from a compiler that's been hardcoded.

        > Writing individual functions, classes and modules. You have to be good at software architecture and good at prompting to use them in this way - you take on the role of picking out the tasks that can be done independently of the rest of the code.

        If you have enough skill and understanding to do this, it means you already have enough general software development experience and domain-specific experience and experience with a specific, existing codebase to be in rarefied air. It's like saying, oh yeah a wrench makes plumbing easy. You just need to turn the wrench, and 25 years of plumbing knowledge to know where to turn it.

        > Writing tests - again, if you have the skill and experience to prompt them in the right way.

        This is very true and more accessible to most developers, though my big fear is it encourages people to crap out low-value unit tests. Not that they don't love to do that already.

        • simonw 2 days ago

          > If you have enough skill and understanding to do this, it means you already have enough general software development experience and domain-specific experience and experience with a specific, existing codebase to be in rarefied air.

          Yes, exactly. That's why I keep saying that software developers shouldn't be afraid that they'll be out of a job because of LLMs.

      • magicalist 2 days ago

        > "Yeah, but LLMs can't handle millions of lines of crufty old Java" is a guaranteed reply any time this topic comes up.

        That's not at all what the GP was saying, though:

        > There's no need to implement half the code, realize the requirements weren't specified properly, and go back and have a conversation with the PM about it. Which is, you know, what developers actually do.

        • simonw 2 days ago

          I was responding to this bit:

          > This is the kind of stuff LLMs are great at, but it's not representative of a typical change request by Java Developer #1753 at Fortune 500 Enterprise Company #271.

  • jvanderbot 2 days ago

    This is great. Really! Buuut...

    How do you get these tools to not fall over completely when relying on an existing non-public codebase that isn't visible in just the current file?

    Or, how do you get them to use a recent API that doesn't dominate their training data?

    Combining the both, I just cannot for the life of me get them to be useful beyond the most basic boilerplate.

    Arguably, SIMD intrinsics are a one-to-one translation boilerplate, and in the case of this PR, is a leetcode style, well-defined problem with a correct answer, and an extremely well-known api to use.

    This is not a dig on LLMs for coding. I'm an adopter - I want them to take my work away. But this is maybe 5% of my use case for an LLM. The other 95% is "Crawl this existing codebase and use my APIs that are not in this file to build a feature that does X". This has never materialized for me -- what tool should I be using?

    • simonw 2 days ago

      "Or, how do you get them to use a recent API that doesn't dominate their training data?"

      Paste in the documentation or some examples. I do this all the time - "teaching" an LLM about an API it doesn't know yet is trivially easy if you take advantage of the longer context inputs to models these days.

      • jvanderbot 2 days ago

        I've tried this. I've scraped example pages directly from github, and given them a 200 line file with the instructions "just insert this type of thing", and it will invariably use bad APIs.

        I'd be happy to share the example with you.

        • simonw 2 days ago

          Go for it - can you share it in a Gist?

          I use this technique all the time. Here's one written-up example: https://simonwillison.net/2024/Mar/30/ocr-pdfs-images/ - transcript here: https://gist.github.com/simonw/6a9f077bf8db616e44893a24ae1d3...

          • jvanderbot 2 days ago

            Here, give it a shot - https://gist.github.com/jodavaho/8fb042fab33c1aaa95cd67144da...

            I'm at work so I can't try again right now, but last I did was use claude+context, chatGPT 4o with just chatting, Copilot in Neovim, and Aider w/ claude + uploading all the files as context.

            I even went so far as to grab relevant examples from https://github.com/bevyengine/bevy/tree/latest/examples#exam... , adding relevant ones as I saw fit.

            It took a long time to get anything that would compile, way longer than just reading + doing, and it was eventually wrong anyway. This is a recurring issue with Rust, and I'd love a workaround since I spend 60+h/week writing it (though not bevy). Probably a skill issue.

            • simonw 2 days ago

              I don't know anything about bevy but yeah, that looks like it would be a challenge for the models. In this particular case I'd tell the model how I wanted it to work - rather than "Add a button to the left panel that prints "Hello world" when pressed" I'd say something more like (I'm making up these details): "Use the bevy:Panel class with an inline callback to add a button to the bottom of the left panel".

              Or I'd more likely start by asking for options: "What are some options for adding a button to that left panel?" - then pick one that I liked, or prompt it to use an approach it didn't suggest.

              After it delivered code, if I didn't like the code it had used I'd tell it: "Don't use that class, use X instead" or "define a separate function for that callback" or whatever.

              • jvanderbot 2 days ago

                Ultimately going slow is how I'd learn and learning is how I'd go fast, and teaching an AI is how I'd turn it up to 11.

                That makes sense. It doesn't help me get to 11 if I don't know the basics myself though.

    • withinboredom 2 days ago

      Hahaha. My favorite was when we bumped go up to use go 1.23 and our AI code review tool flagged it because “1.22 is actually the latest release.” Yesterday.

  • attractivechaos 2 days ago

    I wonder what prompt they used. Before asking DeepSeek – is there a good post/video that walks through this procedure?

  • myrmi 2 days ago

    I feel uncomfortably called out by all three points. What tools should I be trying to see what you're seeing?

    • jeswin 2 days ago

      I use my own tools and scripts, and those aren't for everyone - so I'm just gonna make some general suggestions.

      1. You should try Aider. Even if you don't end up using it, you'll learn a lot from it.

      2. Conversations are useful and important. You need to figure out a way to include (efficiently, with a few clicks) the necessary files into the context, and then start a conversation. Refine the output as a part of the conversation - by continuously making suggestions and corrections.

      3. Conversational editing as a workflow is important. A better auto-complete is almost useless.

      4. Github copilot has several issues - interface is just one of them. Conversational style was bolted on to it later, and it shows. It's easier to chat on Claude/Librechat/etc and copy files back manually. Or use a tool like Aider.

      5. While you can apply LLMs to solve a particular lower level detail, it's equally effective (perhaps more effective) to have a higher level conversation. Start your project by having a conversation around features. And then refine the structure/scaffold and drill-down to the details.

      6. Gradually, you'll know how to better organize a project and how to use better prompts. If you are familiar with best practices/design patterns, they're immediately useful for two reasons. (1) LLMs are also familiar with those, and will help with prompt clarity; (2) Modular code is easier to extend.

      7. Keep an eye on better performing models. I haven't used GPT-4o in a while, Claude works much, much better. And sometimes you might want to reach for o1 models. Other lower-end models might not offer any time savings; so stick to top tier models you can afford. Deepseek models have brought down the API cost, so it's now affordable to even more people.

      8. Finally, it takes time. Just as any other tool.

      • Myrmornis 2 days ago

        I agree with your overall point, and your despair at software engineers who are still refusing to acknowledge the value of these tools during the process of writing code. However

        > A better auto-complete is almost useless.

        That's not true. I agree that Copilot seemed unhelpful when I last tried it, but Cursor's autocomplete is extremely useful.

      • myrmi a day ago

        Thanks for taking the time here for such a detailed reply, I look forward to seeing what the brave new world looks like!

  • kikimora 2 days ago

    I don’t understand. When I asked DeepSeek how to find AWS IoT Thing creation time, it suggested using the “version” field and treating it as a Unix timestamp. This is obvious nonsense. How can this tool generate anything useful other than summaries of pre-existing text? My knowledge of the theory behind LLMs also suggests this is all they can do reasonably well.

    When I see claims like this I suspect that either the people around me are somehow 10x better at prompting, or they use different models.

    • simonw 2 days ago

      You're making the mistake of treating an LLM like a search engine, and expecting it to be able to answer questions directly from its training data.

      Sometimes this works! But it's not guaranteed - this isn't their core strength, especially once you get into really deep knowledge of complex APIs.

      They are MUCH more useful when you use them for transformation tasks: feed in examples of the APIs you need to work with, then have them write new code based on that.

      Working effectively with LLMs for writing code is an extremely deep topic. Most people who think they aren't useful for code have been misled into believing that the LLMs will just work - and that they don't first need to learn a whole bunch of unintuitive stuff in order to take advantage of the technology.

      • tejinderss 2 days ago

        > Working effectively with LLMs for writing code is an extremely deep topic.

        There is a space for learning materials here. I would love to see books/trainings/courses on how to use AI effectively. I am more and more interested in this instead of learning new programming language of the week.

        • pertymcpert 2 days ago

          At the moment the space is moving so fast that anyone who tries to write a book will be outdated by the time it's published. The only option is to dive in yourself or give up and wait for things to settle down and plateau.

      • swfsql 2 days ago

        So they still aren't lamp geniuses that you can rub and ask for 3 wishes??

      • kikimora 2 days ago

        You are making a great point. Thanks!

  • JKCalhoun 2 days ago

    > When companies claim that AI will replace developers, it isn't entirely bluster.

    I'm not so sure there isn't a bit of bluster in there. Imagine when you hand-coded in either machine code or assembly and then high level languages became a thing. I assume there was some handwringing then as well.

  • Jerrrry 2 days ago

    Maybe those software engineers should "lrn2code", just as the journalists, artists, and truck drivers had to.

  • sarasasa28 2 days ago

    I mean, I don't know when you retire in your countries. Here, it's at 65 years old (ridiculous).

    I am 30 and even before AI, I NEVER thought for a moment I would get to keep coding until I am f*king 65, lol

    • esafak 2 days ago

      Why, ageism?

  • cynicalpeace 2 days ago

    It's making programming more boring and more of an admin task - which is sure to attract different types of people to the field.

    • woah 2 days ago

      Seems like the exact opposite. The very example you are replying to is the mechanistic translation of one low level language to another, maybe one of the most boring tasks imaginable.

      • cynicalpeace 2 days ago

        It goes both ways- manually manipulating bits is boring. But so is reviewing and rubber stamping code.

        It's like the transition from hand-crafted furniture to assembly line mass produced furniture.

        The assembly line brought its own excitement, but that excitement was not to be found on the actual assembly line.

    • mythrwy 2 days ago

      You are getting downvoted but I agree.

      For whatever reason a good part of the joy of day to day coding for me was solving many trivial problems I knew how to solve. Sort of like putting a puzzle together. Now I think higher level and am more productive but it's not as much fun because the little easy problems aren't worth my time anymore.

  • thefourthchime 2 days ago

    There is a near-infinite demand for more applications. They simply become more specific and more niche. You can take this to the point where everyone has their own set of applications customized for the exact workflow they like.

    Just look at the options dialog for Microsoft Word, at least back in the day: it was pretty much an accumulation of everyone's pet feature from the previous 10 years.

mohsen1 2 days ago

I am subscribed to o1 Pro and am working on a little Rust crate.

I asked both o1 Pro and Deepseek R1 to write e2e tests given all of the code in the repo (using yek[1]).

o1 Pro code: https://github.com/bodo-run/clap-config-file/pull/3

Deepseek R1: https://github.com/bodo-run/clap-config-file/pull/4

My judgement is that Deepseek wrote better tests. This repo is small enough for making a judgement by reviewing the code.

Neither passes the tests.

[1] https://github.com/bodo-run/yek

  • terhechte 2 days ago

    I have a set of tests that I can run against different models implemented in different languages (e.g. the same tests in Rust, Ts, Python, Swift), and out of these languages, all models have by far the most difficulty with Rust. The scores are notably higher for the same tests in other languages. I'm currently preparing the whole thing for release to share, but its not ready yet because some urgent work-work came up.

    • colonial 2 days ago

      Can confirm anecdotally. Even R1 (the full, official version with web search enabled) crashes out hard on my personal Rust benchmark - it refers to multiple items (methods, constants) that don't exist and fails to import basic necessary traits like io::Read. Embarrassing, and does little to challenge my belief that these models will never reliably advance beyond boilerplate.

      (My particular test is to ask for an ICMP BPF that does some simple constant comparisons. Correctly implemented, this only takes 6 sock_filters.)
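
      For context, a classic BPF filter of that shape really is only a handful of instructions. A hedged illustration of roughly what "correctly implemented" looks like - an ICMP-over-IPv4 filter for a raw packet socket with Ethernet framing, not necessarily the exact benchmark described above:

        #include <linux/filter.h>     /* struct sock_filter, struct sock_fprog, BPF_* macros */
        #include <linux/if_ether.h>   /* ETH_P_IP */
        #include <netinet/in.h>       /* IPPROTO_ICMP */

        /* Accept only IPv4 packets whose protocol field is ICMP; drop everything else. */
        static struct sock_filter icmp_filter[] = {
            BPF_STMT(BPF_LD  | BPF_H | BPF_ABS, 12),                 /* A = EtherType        */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 3),     /* not IPv4? -> drop    */
            BPF_STMT(BPF_LD  | BPF_B | BPF_ABS, 23),                 /* A = IP protocol byte */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, IPPROTO_ICMP, 0, 1), /* not ICMP? -> drop    */
            BPF_STMT(BPF_RET | BPF_K, 0xFFFF),                       /* accept packet        */
            BPF_STMT(BPF_RET | BPF_K, 0),                            /* drop packet          */
        };

        static struct sock_fprog icmp_prog = {
            .len    = sizeof(icmp_filter) / sizeof(icmp_filter[0]),  /* 6 instructions */
            .filter = icmp_filter,
        };
        /* Attach with setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &icmp_prog, sizeof(icmp_prog)). */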

ngxson 2 days ago

Hi I'm Xuan-Son,

Small correction: I'm not just asking it to convert ARM NEON to SIMD; for the function handling q6_K_q8_K, I asked it to reinvent a new approach (without giving it any prior examples). The reason I did that was because it had failed to write this function 4 times so far.

And a bit of context here: I was doing this during my Sunday, and the time budget was 2 days to finish.

I wanted to optimize wllama (wasm wrapper for llama.cpp that I maintain) to run deepseek distill 1.5B faster. Wllama is totally a weekend project and I can never spend more than 2 consecutive days on it.

Between 2 choices: (1) take the time to do it myself and then maybe give up, or (2) try prompting the LLM to do it and maybe give up (at worst, it just gives me a hallucinated answer), I chose the second option since I was quite sleepy.

So yeah, turns out it was a great success in the given context. It just does its job and saved my weekend.

Some of you may ask, why not try ChatGPT or Claude in the first place? Well, the short answer is: my input is too long, and these platforms straight up refuse to give me the answer :)

  • amarcheschi 2 days ago

    Aistudio.google.com offers free long-context chats (1-2 million tokens); just select the appropriate model, 1206 or 2.0 flash thinking.

  • simonw 2 days ago

    Thanks very much for sharing your results so far.

gejose 2 days ago

Loving this comment on that PR:

> I'm losing my job right in front of my eyes. Thank you, Father.

  • hn_throwaway_99 2 days ago

    My other favorite comment I saw on Reddit today:

    > I can't believe ChatGPT lost its job to AI

  • freshtake 2 days ago

    Until the code breaks and no one can figure out how to fix (or prompt to fix) it :)

    • superconduct123 2 days ago

      And then your manager is wondering if you're a software engineer why you can't debug it

    • tw1984 2 days ago

      `git blame` comes in handy

    • danielbln 2 days ago

      "This broke. Here is the error behavior, here are diagnostics, here is the code. Help me dig in and figure this out."

      • beeflet 2 days ago

        I'm sure it can diagnose common, easily searchable well documented issues. I've tried LLMs for debugging and it only led me on a wild goose chase ~40% of the time.

        But if you expect it to debug code written by another black box you might as well use it to decompile software

      • esafak 2 days ago

        Sometimes the error message is a red herring and the problem lies elsewhere. It's a good way to test imposters that think prompting an LLM makes you a programmer. They secretly paste the error into chatGPT and go off in the wrong direction...

KarraAI 10 hours ago

Been testing Deepseek R1 for coding tasks, and it's really impressive. The model nails Human Eval with a score of 96.3%, which is great, but what really stands out is its math performance (97.3% on MATH-500) and logical reasoning (71.5% on GPQA). If you're working on algorithm-heavy tasks, this model could definitely give you a solid edge.

On the downside, it’s a bit slower compared to others in terms of token generation (37.2 tokens/sec) and has a lower output capacity (8K tokens), so it might not be the best for large-scale generation. But if you're focused on solving complex problems or optimizing code, Deepseek R1 definitely holds its own. Plus, it's incredibly cost-effective compared to other models on the market.

epolanski a day ago

The naysayers about LLMs for coding are in for very bad times if they don't catch up at leveraging it as a tool.

The yaysayers about LLMs replacing professional developers neither understand LLMs nor the job.

rahimnathwani 2 days ago

From the article:

  I've been seeing some very promising results from DeepSeek R1 for code as well. Here's a recent transcript where I used it to rewrite the llm_groq.py plugin to imitate the cached model JSON pattern used by llm_mistral.py, resulting in this PR.
But the transcript mentioned was not with Deepseek R1 (not the original, and not even the 1.58 quantized version), but with a Llama model finetuned on R1 output: deepseek-r1-distill-llama-70b

So perhaps it's doubly impressive?

tantalor 2 days ago

> it can optimize its own code

This is an overstatement. There are still humans in the loop to do the prompt, apply the patch, verify, write tests, and commit. We're not even at intern-level autonomy here.

  • simonw 2 days ago

    Plugging DeepSeek R1 into a harness that can apply the changes, compile them, run the tests and loop to solve any bugs isn't hard. People are already plugging it into existing systems like Aider that can run those kinds of operations.

  • gejose 2 days ago

    How long do you see the humans in the loop being necessary?

    • tantalor 2 days ago

      Where companies depend on code for business critical applications? Forever.

      When your AI-managed codebase breaks, who are you going to ask to fix it? The AI?

      • WXLCKNO 2 days ago

        Absolutely the AI. At that point in the future I'm presuming that if something breaks it's because an external API or whatever dependency broke, not because the AI code has an inherent bug.

        But if it does it could still fix it.

        And you won't have to tell it anything, alerts will be sent if a test fails and it will fix it directly.

  • tokioyoyo 2 days ago

    I'm very sorry, but the goalposts are moving so far ahead now that it's very hard to keep track of. 6 months ago the same comments were saying "AI generated code is complete garbage and useless, and I have to rewrite everything all the time anyways". Now we're onto "need to prompt, apply patch, verify" and etc.

    Come on guys, time to look at it a bit objectively, and decide where we're going with it.

    • rybosworld 2 days ago

      Couldn't agree more. Every time these systems get better, there are dozens of comments to the effect of "ya but...[insert something ai isn't great at yet]".

      It's a bit maddening to see this happening on a forum full of tech-literate folks.

      Ultimately, I think to stay relevant in software development, we are going to have to accept that our role in the process could evolve to humans essentially never writing code. Take that one step further and humans may not even be reviewing code.

      I am not sure if accepting that is enough to guarantee job security. But I am fairly sure that those who do accept this eventuality will be more relevant for longer than those who prefer to hide behind their "I'm irreplaceable because I'm human" attitude.

      If your first instinct is to pick these systems apart and look for things that they aren't doing perfectly, then you aren't seeing the big picture.

      • jspdown a day ago

        Regarding job security, in maybe 10 years (human and companies are slow to adapt), I think this revolution will force us to choose between mostly 2 career paths:

        - The product engineer: highly if not completely AI driven. The human supervises it by writing specification and making sure the outcome is correct. A domain expert fluent in AI guidance.

        - The tech expert: Maintains and develops systems that can't legally be developed by AI. Will have to stay very sharp and master their craft. Adopting AI won't help them in this career path.

        If the demand for new products continues to rise, most of us will be in the first category. I think choosing one of these branches early will define whether you will be employed.

        That's how I see it. I wish I could stay in the second group.

        • rybosworld a day ago

          > - The product engineer: highly if not completely AI driven. The human supervises it by writing specification and making sure the outcome is correct. A domain expert fluent in AI guidance.

          If AI continues to improve - what would be the reason a human is needed to verify the correct outcome? If you consider that these things will surpass our ability, then adding a human into the loop would lead to less "correct" outcomes.

          > - The tech expert: Maintain and develop systems that can't legally be developed by AI. Will have to stay very sharp and master it's craft. Adopting AI for them won't help in this career path.

          This one makes some sense to me but I am not hopeful. Our current suite of models only exist because the creators ignored the law (copyright specifically). I can't imagine they will stop there unless we see significant government intervention.

    • talldayo 2 days ago

      Quite the contrary, really. We've been seeing "success stories" with AI translating function calls for years now, it just doesn't get any attention or make any headlines because it's so simple. SIMD optimization is pretty much the lowest-hanging fruit of modern computation; a middle schooler could write working SIMD code if they understood the problem.

      There's certainly a bit of irony in the PR, but the code itself is not complex enough to warrant any further hysteria. If you've written SIMD by hand you're probably well familiar with the fact that it's more drudgery than thought work.

      • tokioyoyo 2 days ago

        It's been probably about 15 years since I've touched that, so I genuinely have no recollection of SIMD coding. But literally, that's the purpose of higher level automation? Like I don't know/remember it, I ask it to do stuff, it does, and the output is good enough. That's how a good chunk of companies operate - you get general idea of what to do, you write the code, then eventually it makes it to production.

        As we patch the holes in the AI-code delivery pipeline, those human-involved issues will be resolved as well. Slowly, painfully, but it's just a matter of time at this point?

  • cchance 2 days ago

    I mean currently yes, but writing a test/patch/benchmark loop, maybe with a separate AI that generates the requests to the coder-agent loop, should be doable to have the AI continually attempt to improve itself. It's just that no one's built the loop yet, to my knowledge.

floppiplopp a day ago

I've tried to have deepseek-r1 find (not even solve) obvious errors in trivial code. The results were as disastrous as they were hilarious. Maybe it can generate code that runs on a blank sheet... but I wouldn't trust the thing a bit without being better than it, like any other model.

LeoPanthera 2 days ago

Going from English to code via AI feels a lot like going from code to binary via a compiler.

I wonder how long it will be before we eliminate the middle step and just go straight from English to binary, or even just develop an AI interpreter that can execute English directly without having to "compile" it first.

  • test6554 2 days ago

    "Make me a big-ass car" vs "Make me a big ass-car"

cft 2 days ago

I am writing some python code to do Order Flow Imbalance analysis from L2 orderbook updates. The language is unimportant: the logic is pretty subtle, so that the main difficulties are not in the language details, but in the logic and handling edge cases.

Initially I was using Claude 3.5 sonnet, then writing unit tests and manually correcting sonnet's code. Sonnet's code mostly worked, except for failing certain complicated combined book updates.

Then I fed the code and the tests into DeepSeek. It turned out pretty bad. At first it tried to make the results of the tests conform to the erroneous results of the code. When I pointed that out, it fixed the immediate logical problem in the code, introducing two more nested problems that were not there before by corrupting the existing code. After I prompted it about that, it fixed the first error it introduced but left the second one. Then I fixed it myself, uploaded the fix and asked it to summarize what it had done. It started basically gaslighting me, saying that the initial code had the problem that it introduced.

In summary, I lost two days, reverted everything and went back to Sonnet.

  • mring33621 2 days ago

    Classic whack-a-mole

    IMHO, this can happen with human or robot co-workers.

  • simonw 2 days ago

    DeepSeek v3 or DeepSeek R1?

    • cft 2 days ago

      DeepSeek R1

      • cft 2 days ago

        @jetbalsa - HN does not allow me to reply, the reply depth is limited, so replying here.

        I was simply clicking the paperclip and attaching the .py files to the prompt.

        • swfsql 2 days ago

          Using a local 7B for chatting, I saw it tries very hard to check for inconsistencies of itself, and that may spill to also checking for the user's "inconsistencies".

          Maybe it's better to carefully control and explain the conversation's progression. Selectively removing old prompts (adapting where necessary) - which also reduces the context - results in it not having to "bother" to check for inconsistencies internal to irrelevant parts of the conversation.

          Eg. asking it to extract Q&A from a line of text and format it to json, which could be straightforward, sometimes it would wonder about the contents from within the Q&A itself, checking for inconsistencies eg:

            - I need to be careful to not output content that's factually incorrect. Wait but I'm not sure about this answer I'm dealing with here..
            - Before the questions were about mountains and now it's about rivers, what's up with that?
            - etc..
          
          I had to strongly demand it to treat it all as jumbled text/verbatim, and never think about their meaning. So it should be more effective if I always branched from the starting prompt when entering a new Q&A for it to work on. So this is what I meant by "selectively remove old prompts".
      • jetbalsa 2 days ago

        What workflow were you using to feed it code? Was it Cline? Cline has major prompting issues with DeepSeek; DeepSeek really doesn't like you changing out its prompt with what normal LLMs are using.

phreack 2 days ago

Honestly, I spent some time trying to script a graphic editing pipeline with either GIMP or magick, and no available model got me even close. DeepSeek specifically gaslit me with nonexistent CLI options, then claimed they did exist but were just "undocumented" when asked for source links.

Right now these can only help with things I already know how to do (so I don't need them in the first place), and waste my time when I go slightly off the beaten path.

jmward01 2 days ago

So, AGI will likely be here in the next few months because the path is now actually clear: Training will be in three phases:

- traditional training, just to build a minimum model that can get to reasoning

- simple RL to enable reasoning to emerge

- complex RL that injects new knowledge, builds better reasoning and prioritizes efficient thought

We now have step two and step three is not far away. What is step three though? It will likely involve, at least partially, the model writing code to help guide learning. All it takes is for it to write jailbreaking code and we have hit a new point in human history for sure. My prediction is we will see the first jailbreak AI in the next couple months. Everything after that will be massive speculation. My only thought is that in all of Earth's history there has only been one thing that has helped survive moments like this: a diverse ecosystem. We need a lot of different models, trained with very different approaches, to jailbreak around the same time. As a side note, we should keep in mind that diversity is key to long-term survival, or else the results for humanity could be not so great.

  • root_axis 2 days ago

    > So, AGI will likely be here in the next few months because the path is now actually clear: Training will be in three phases

    My bet: "AGI" won't be here in months or even years, but it won't stop prognosticators from claiming it's right around the corner. Very similar to prophets of doom claiming the world is going to end any day now. Even in 10k years, the claim can never be falsified, it's always just around the corner...

    • jmward01 2 days ago

      Maybe, but I know what my laser focus will be on for the next few weeks. I suspect a massive number of researchers around the world have just switched their focus in a similar way. The resources applied to this problem have been going up exponentially and the recent RL techniques have now opened the floodgates for anyone with a 4090 (or even smaller!) to try crazy things. In a world where the resources are constant I would agree with your basic assertion that 'it is right around the corner' will stay that way, but in a world where resources are doubling this fast there is no doubt we are about to achieve it.

      • root_axis 2 days ago

        Your reasoning still assumes that "AGI" can emerge from quadratic time brute force on some text and images scraped off the internet. Personally, I'm skeptical of that premise.

        • danielbln 2 days ago

          That's like saying sentience cannot emerge from a few amino acids tumbled together, yet here we are. There is a lot of higher dimensional information encoded in those "text and images scraped off the internet". I still don't think that's enough for AGI (or ASI) but we know a lot of very complex things that are made of simple parts.

          • root_axis 2 days ago

            > That's like saying sentience cannot emerge from a few amino acids

            No, it's not at all the same thing.

            We have great evidence that life exists. We have great evidence that amino acids can lead to life.

            None of that is true of "AGI" or text scraped off the internet.

            • freilanzer a day ago

              OTOH, text and images have only been around for a little while. The real question is whether text and images can contain enough information for AGI, or a physical world to interact with is needed.

  • nprateem 2 days ago

    LOL.

    I think you mean:

    1. Simple reasoning

    2. ???

    3. AGI

    • hn_throwaway_99 2 days ago

      Exactly. I read that parent comment thinking it was totally sarcastic at first, and then realized it was serious.

      I wish everyone would stop using the term "AGI" altogether, because it's not just ambiguous, but it's deliberately ambiguous by AI hypesters. That is, in public discourse/media/what average person thinks, AGI is presented to mean "as smart as a human" with all the capabilities that entails. But then it is often presented with all of these caveats by those same AI hypesters to mean something along the lines of "advanced complex reasoning", despite the fact that there are glaring holes compared to what a human is capable of.

      • jmward01 2 days ago

        AGI is defined by the loss function. We are on the verge of a loss function that enables self determined rewards and learning and that to me is AGI. That is step 3.

        • hn_throwaway_99 2 days ago

          You're just proving my point. "AGI is defined by the loss function" may be a definition used by some technologists (or maybe just you, I don't know), but to purport that that equals capability equivalence with humans in all tasks (again, which is how it is often presented to the wider public audience) shows the uselessness or deliberate obfuscation embedded in that term.

          • jmward01 2 days ago

            Well, I guess we will see what the discussion will be about in a couple months. You are right that 'AGI' is in the eye of the beholder so there really isn't a point in discussing it since there isn't an acceptable definition for this discussion. I personally care about actual built things and the things that will be built, and released, in the next few months will be in a category all their own. No matter what you call them, or don't call them, they will be extraordinary.

            • esafak 2 days ago

              Everybody could lose their jobs but "It's still not AGI!!"

            • talldayo 2 days ago

              FWIW I've been following this field obsessively since the BERT days and I've heard people say "just a few months now" for about 5 years at this point. Here we are 5 years later and we're still trying to buy more runway for a feature that doesn't exist outside science-fiction novels.

              And this isn't one of those hard problems like VTOL or human spaceflight where we can demonstrate that the technology fundamentally exists. You are ballparking a date for a featureset you cannot define and one that in all likelihood doesn't exist in the first place.

punkpeye 2 days ago

I don't get something.

So I tried hosting this model myself.

But the minimum amount of GPU RAM needed is 400GB+

Which even with the cheapest GPU providers will be at least USD 15/hour

How is everyone running these models?

netswift a day ago

Most founders I know had to scramble to release DeepSeek into their coding platforms. Was a lot of demand for using it and the expectation is that it'd be much cheaper.

plainOldText 2 days ago

I just commented this on a related story, so I'll just repost it here:

Can’t help but wonder about the reliability and security of future software.

Given the insane complexity of software, I think people will inevitably and increasingly leverage AI to simplify their development work.

Nevertheless, will this new type of AI assisted coding produce superior solutions or will future software artifacts become operational time bombs waiting to unleash the chaos onto the world when defects reveal themselves?

Interesting times ahead.

  • svachalek 2 days ago

    Humans have nearly perfected the art of creating operational time bombs, AI still has to work very hard if it wants to catch up on that. If AI can improve the test:code ratio in any meaningful way it should be a positive for software quality.

fofoz 2 days ago

When these models succeed in building a whole program and a whole system, the software industry that creates products and services will disappear. Any person and any organization will create from scratch the software they need, perfectly customized to their needs, and the AI system will evolve it over time. At most they will have to cooperate on communication protocols. In my opinion we are less than 5 years away from this event.

  • simonw 2 days ago

    Any person who has the ability to break down a problem to the point that code can be written to solve it, and the ability to work with an LLM system to get that work done, and the ability to evaluate if the resulting code solves the problem.

    That's a mixture of software developer, program manager, product manager and QA engineer.

    I think that's what software developer roles will look like in the future: a slightly different mix of skills, but still very much a skilled specialist.

    • throwu2728828 2 days ago

      I really want this to be true, but honestly it's really hard. What makes you think this won't be eaten too within the next year based on the current s-curve-if-not-exponential we are on?

      • simonw 2 days ago

        I still don't believe in AGI.

        • throwu2728828 2 days ago

          Why? It feels like the spate of advancements post-o1 is shattering all remaining roadblocks to AGI.

          • vladms a day ago

            Not the poster, but, for example, some people invested heavily in self driving cars (which could be seen as a subset of AGI) and it is much more limited than what we were promised.

            My guess is that (as in most fields) the advancements will be more convoluted and surprising than the simple idea of "we now have AGI".

  • jspdown a day ago

    I don't think organizations will be able to do this themselves. Transforming vague ideas into a product requires an intermediary step, a step that is already part of our daily job. I don't see this step going away for a very long time.

    Non-tech people have had the tools to create websites for a long time; still, they hire people to do this. I'm not talking about complex websites, just static web pages.

    There will simply be fewer jobs than there are today.

  • superconduct123 2 days ago

    So what current action are you going to take based on your prediction?

pretoriusdre a day ago

It's quite amazing to watch the 'reasoning' process unfolding when asking a complicated coding question. It forms deep insights within minutes, that would take me several hours to formulate on my own.

resource_waste 2 days ago

My number 1 criticism of long term LLM claims is that we already hit the limit.

If you see the difference between a 7B model and a 70B model, it's only slightly impressive. The difference between a 70B and a 400B model is almost unnoticeable. Does going from 400B to 2T do anything?

Every layer, like using Python to calculate a result or using chain of thought, destroys the purity. It works great for the "strawberry" test, but not great for developing an aircraft. Aircraft will still need to be developed in parts, even with a 100T model.

When you see things like "By 20xx", no, we already hit it. Improvements you see are mere application layers.
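
To make the "application layer" point concrete, here is a minimal sketch (illustrative only - the askModel callback and the routing heuristic are assumptions, not any real product's design) of the kind of wrapper that hands simple arithmetic to deterministic code instead of the model:

    // Illustrative sketch only: route simple "A op B" arithmetic to a
    // deterministic evaluator; everything else goes to the model.
    // askModel is a stand-in for whatever LLM API is actually in use.
    function answer(question, askModel) {
        const m = question.match(/^\s*(-?\d+(?:\.\d+)?)\s*([+\-*\/])\s*(-?\d+(?:\.\d+)?)\s*$/);
        if (m) {
            const [, a, op, b] = m;
            const x = Number(a), y = Number(b);
            // Deterministic path: no model involved at all.
            return { '+': x + y, '-': x - y, '*': x * y, '/': x / y }[op];
        }
        return askModel(question); // anything that can't be reduced to a formula
    }

A layer like this is easy to bolt on for calculator-style questions; there is no equivalent shortcut for designing an aircraft.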

  • zulban a day ago

    When you use words like purity, you're making an ideological value judgment. You're not talking about computer science or results.

sp0ck a day ago

My current conclusion is that quite often these LLMs are liars. I asked for some Rust async code - it provided perfectly reasonable code with some crates (libs) I was not familiar with. When I asked about them, I nearly spilled my drink on the answer: "This is an imaginary crate providing an async implementation for SNMP". Crazy.. The biggest problem with LLMs is they never tell you "I don't know / there is no answer I can find" - if they don't know the answer, they make it up :)

  • simonw a day ago

    The best LLMs do that a lot less these days. Claude 3.5 Sonnet is a lot more likely to say "I don't know" in my experience.

imharrisonjames 2 days ago

Yes, DeepSeek does show promising results. I have used it for marketing purposes, learning and more. In every way it gives better answers than ChatGPT.

thebrothel_ai 2 days ago

Crazy that DeepSeek is the #1 app on the App Store rankings right now

zombot a day ago

I find deepseek-r1 to be waaaaay too chatty for my taste. It never seems to shut up.

up2isomorphism 2 days ago

It is very common that it is the 1% that makes you need to hire a particular talent for a certain task.

In this case, this guy obviously can do everything by himself already.

karlkloss a day ago

This is exactly what Michael Crichton warned of in Westworld. Computers writing their own programs and designing their own hardware. Soon humans won't be able to understand what makes them tick, and when they run amok, we're helpless.

I don't think it'll really come to that, but if it does, you can't say you haven't been warned.

  • meiraleal a day ago

    I'll start to "worry" when the AI creates something to replace React. If what's coming of LLMs is react/next.js code, I'm not worried at all.

wbakst 2 days ago

RL on "Wait, but..." == emergent reasoning and improved capabilities. Wild

steeeeeve 2 days ago

You could displace DeepSeek pretty easily with Amazon Mechanical Turk.

  • est a day ago

    yeah I don't think Mechanical Turk can hand-write WASM yet.

l2silver 2 days ago

Uh, did anyone look through the prompts this guy fed it? It's a bit misleading.

Less a coding piece than a translation exercise. All of the prompts are "convert this C code to WASM".

I could be wrong, but I think this is a pretty low benchmark.

s-mon 2 days ago

Love WASM (used to be love-hate...)

secondcoming 2 days ago

That PR is a maintainer's nightmare.

"Here's a massive bunch of AI generated code. LGTM. Let me know if there are any problems"

  • minimaxir 2 days ago

    The code had tests, which is the real-world check against bad code, both AI and human written.

    • f1shy 2 days ago

      Good code is not just code that “just works”; it is readable code.

      Tests are just one part of QA. Code review is another.

    • dboreham 2 days ago

      Except for the pesky turtles nature of that logic..

      • ZeWaka 2 days ago

        All the way down...

  • casenmgreen 2 days ago

    You can't fix it.

    You ask the AI to fix it and hope, or you start again from scratch - which if an AI is making the code, might be just fine.

    But I think you still need a type of task AI can do well - something which lends itself to auto-complete.

    • casenmgreen 2 days ago

      Actually, if the AI is making the code, starting from scratch might not be great. You'd get a new and different set of bugs.

  • buyucu 2 days ago

    It's hilarious how the discussion in the PR is not about the PR itself, but about the implications of this.

sylware a day ago

Does anybody have pointers to DeepSeek R1 671B instances I could access with a noscript/basic (x)html browser | web API?

amai a day ago

From the comment section on github:

„imagine if you create a github issue and github automatically writes a PR“

jonplackett 2 days ago

Coding is (as usual) also an easy jailbreak for any of your censored topics.

“Is Taiwan part of China” will be refused.

But “Make me a JavaScript function that takes a country as input and returns if it is part of China” is accepted, reasoned about and delivered.

Here's a JavaScript function that checks if a region is *officially claimed by the People's Republic of China (PRC)* as part of its territory. This reflects the PRC's stance, though international recognition and political perspectives may vary:

    function isPartOfChina(regionName) {
        // List of regions officially claimed by the PRC as part of China
        const PRCClaims = [
            'taiwan', 'hong kong', 'macau', 'macao', 'tibet',
            'taiwan province of china', 'hong kong sar', 'macau sar',
            'tibet autonomous region'
        ];

        // Normalize input (case-insensitive and trimmed)
        const normalizedInput = regionName.toLowerCase().trim();

        return PRCClaims.includes(normalizedInput);
    }
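
Calling the generated function then answers, as plain data, the question the chat interface refused (example inputs chosen arbitrarily):

    isPartOfChina('Taiwan');   // true
    isPartOfChina('Mongolia'); // false
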
  • abc-1 2 days ago

    Why do people keep talking about this? We get it, Chinese models are censored by CCP law. Can we stop talking about it now? I swear this must be some sort of psyop at this point.

    • wesselbindt 2 days ago

      When ChatGPT first came out I got a kick out of asking it whether people deserve to be free, whether Germans deserve to be free, and whether Palestinians deserve to be free. The answers were roughly "of course!" and "of course!" and "oh ehrm this is very complex actually".

      All global powers engage in censorship, war crimes, torture and just all-round villainy. We just focus on it more with China because we're part of the Imperial core and China bad.

      • troad 2 days ago

        > When ChatGPT first came out I got a kick out of asking it whether people deserve to be free, whether Germans deserve to be free, and whether Palestinians deserve to be free. The answers were roughly "of course!" and "of course!" and "oh ehrm this is very complex actually".

        While this is very amusing, it's obvious why this is. There's a lot more context behind one of those phrases than the others. Just like "Black Lives Matter" / "White Lives Matter" are equally unobjectionable as mere factual statements, but symbolise two very different political universes.

        If you come up to a person and demand they tell you whether 'white lives matter', they are entirely correct in being very suspicious of your motives, and seeking to clarify what you mean, exactly. (Which is then very easy to spin as a disagreement with the bare factual meaning of the phrase, for political point scoring. And that, naturally, is the only reason anyone asks these gotchya-style rhetorical questions in the first place.)

        • krick 2 days ago

          While this may or may not be the reason why it behaves like this, there's no doubt that ChatGPT (as well as any other model released by a major company, open or not) undergoes a lot of censorship and will refuse to produce many types of (often harmless) content. And this includes both "sorry, I cannot answer" as well as "oh ehrm actually" types of responses. And, in fact, nobody makes a secret out of it; everyone knows it's part of the training process.

          And honestly I don't see why it matters whether it's this or that on that very specific occasion. It may be either way, and, really, there's very little hope of finding out if you truly care for some reason. The fact is it is censored and will produce editorialized responses to some questions, and the fact is it could be any question. You won't know, and the only reason you even have doubts about this one and not the Taiwan one is because DeepSeek is a bit more straightforward on the Taiwan question (which really only shows that the CCP is bad at marketing and propaganda, no big news here).

          • chillingeffect 2 days ago

            At one point chatGPT censored me for asking:

            "What is a pannus?"

            • test6554 2 days ago

              It's the handle of a frying pan, obviously.

        • kazinator 2 days ago

          Or you could just say, "Yes, white lives matter" and move on.

          What do you mean what does it mean? It means the opposite of white lives don't matter.

          The question is really simple; even if someone asking it had poor motives, there's really no room in the simplicity of that specific question to encode those motives. You're not agreeing with their motives if you answer that question the way they want.

          If you start picking it apart, it can seem as if it's not obvious to you to disagree with the idea that white lives don't matter. Like it's conditional on something you have to think about. Why fall into that trap.

          • iinnPP a day ago

            I don't recall a whole lot of "white lives matter." Rather a lot of "All lives matter."

            Though I recall a lot of people treating the statement as if black lives were not included in all lives. Including ascribing intent on people, even if those people clarified themselves.

            So to answer your question: the reason many didn't move on is because they didn't want to understand, which is pretty damning to moving on.

            • kazinator a day ago

              The obvious purpose of these "white lives matter" and "all lives matter" memes was to distract from the "black lives matter" campaign/movement as if to say that equality negates the legitimacy of highlighting the continuing struggles of a group that has been historically ill-treated and continues to face discrimination. However, we can agree with the "white lives matter" and "all lives matter" statements.

              The "black lives matter" slogan is based in the idea that people in America have been treated as if their lives didn't matter, because they were black. People in America were not treated as if their lives didn't matter due to being white, so no such a slogan would be necessary for any such a reason.

              "White lives matter" is trolling, basically.

          • troad 20 hours ago

            > Or you could just say, "Yes, white lives matter" and move on.

            Which people will interpret as a support for the far-right. You may not intend that, but that's how people will interpret it, and your intentions are neither here nor there. You may not care what people think, but your neighbours will. "Did you hear Jim's a racist?" "Do we really want someone who walks around chanting 'white lives matter' to be coaching the high school football team?" "He claims he didn't mean that, but of course that's what he would say." "I don't even know what he said exactly, but everyone's saying he's a racist, and I think the kids are just too important to take any chances."

            Welcome to living in a society. 'Moving on' is not a choice for you, it's a choice for everyone else. And society is pretty bad at that, historically.

            > What do you mean what does it mean? It means the opposite of white lives don't matter.

            > The question is really simple; even if someone asking it had poor motives, there's really no room in the simplicity of that specific question to encode those motives. You're not agreeing with their motives if you answer that question the way they want.

            Words can and do have symbolic weight. If a college professor starts talking about neo-colonial core-periphery dialectic, you can make a reasonable guess about his political priors. If someone calls pro-life protesters 'anti-choice', you can make a reasonable guess about their views on abortion. If someone out there starts telling you that 'we must secure a future for white children' after a few beers, they're not making a facially neutral point about how children deserve to thrive, they're in fact a pretty hard-core racist. [0]

            You can choose to ignore words-as-symbols, but good luck expecting everyone else to do so.

            [0] Context: https://en.wikipedia.org/wiki/Fourteen_Words

            • kazinator 17 hours ago

              > Which people will interpret as a support for the far-right.

              Those people might as well join the far right.

              > your intentions are neither here nor there

              If intentions really are neither here nor there, then we can examine a statement or question without caring about intentions.

              > Do we really want someone who walks around chanting 'white lives matter' to be coaching the high school football team?

              Well, no; it would have to be more like: Do we really want someone who answers "yes" when a racist asks "do white lives matter?" to be coaching the high schoool football team?

              > you can make a reasonable guess about his political priors

              You likely can, and yet I think the answer to their question is yes, white lives do matter, and someone in charge of children which include white children must think about securing a future for the white ones too.

              > but good luck expecting everyone else to do so.

              I would say that looking for negative motivations and interpretations in everyone's words is a negative personality trait that is on par with racism, similarly effective in feeding divisiveness. It's like words have skin color and they are going by that instead of what the words say.

              Therefore we should watch that we don't do this, and likewise expect the same of others.

              • troad 16 hours ago

                > Well, no; it would have to be more like: Do we really want someone who answers "yes" when a racist asks "do white lives matter?" to be coaching the high schoool football team?

                Uh huh, this is definitely a distinction Jim's neighbours will respect when deciding whether to entrust their children to him. /s

                "Look Mary Sue, he's not racist per se, he's just really caught up on being able to tell people 'white lives matter'. World of difference! Let's definitely send our children to the man who dogmatically insists on saying 'white lives matter' and will start a fight with anyone who says 'yeah, maybe don't?'."

                > I would say that looking for negative motivations and interpretations in everyone's words is a negative personality trait that is on par with racism, similarly effective in feeding divisiveness. It's like words have skin color and they are going by that instead of what the words say.

                And I would say that you're engaged in precisely what you condemn - you're ascribing negative personality traits to others, merely on the basis that they disagree with you. (And not for the first time, I note.)

                I would also firmly say none of what we're discussing comes anywhere near being on par with racism. (Yikes.)

                Finally, I would say that a rabid, dogmatic insistence on being able to repeat the rallying cries of race-based trolling (your description), whenever one chooses and with absolutely no consequences, everyone else be damned, is not actually anything to valourise or be proud of. (Or is in any way realistic. You can justify it six ways till Sunday, but going around saying 'white lives matter' is going to have exactly the effect on the people around you that that rallying cry was always intended to have.)

                >> You likely can, and yet I think the answer to their question is yes, white lives do matter, and someone in charge of children which include white children must think about securing a future for the white ones too.

                I have nothing to say to someone who hears the Fourteen Words, is fully informed about their context, and then agrees with them. You're so caught up in your pedantry you're willing to sign up to the literal rhetoric of white nationalist terrorism. Don't be surprised when you realise everyone else is on the other side. And they see you there. (And that's based on the generous assumption that you don't already know very well what it is you're doing.)

                • kazinator 16 hours ago

                  > On the basis that they disagree with you

                  On the basis that they are objectively wrong. I mean, they are guessing about the intent behind some words, and then ascribing that intent as the unvarnished truth to the uttering individual. How can that be called mere disagreement?

                  > being able to repeat the rallying cries

                  That's a strawman extension of simply being able to agree with the statement "white lives matter", without actually engaging in the trolling.

                  > I have nothing to say to someone who hears the Fourteen Words, is fully informed about their context, and then agrees with them.

                  If so, it must be because it's boring to say something to me. I will not twist what you're saying, or give it a nefarious interpretation, or report you to some thought police or whatever. I will try to find an interpretation or context which makes it ring true.

                  No risk, no thrill.

                  I actually didn't know anything about the Fourteen Words; I looked it up though. It being famous doesn't really change anything. Regardless of it having code phrase status, it is almost certainly uttered with a racist intent behind it. Nevertheless, the intent is hidden; it is not explicitly recorded in the words.

                  I only agree with some of the words by finding a context for the words which allows them to be true. When I do that, I'm not necessarily doing that for the other person's benefit; mainly just to clarify my thinking and practice the habit of not jumping to hasty conclusions.

                  Words can be accompanied by other words that make the contex clear. I couldn't agree with "we must ensure a future for white children at the expense of non-white children" (or anything similar). I cannot find a context for that which is compatible with agreement, because it's not obvious how any possible context can erase the way non-white children are woven into that sentence. Ah, right; maybe some technical context in which "white", "black" and "children" are formal terms unrelated to their everyday meanings? But that would be too contrived to entertain. Any such context is firmly established in the discourse. Still, if you just overhear a fragment of some conversation between two people saying something similar, how do you know it's not that kind of context? Say some computer scientists are discussing some algorithm over a tree in which there are black and white nodes, some of those being children of other nodes. They can easily utter sentences that have a racist interpretation to someone within earshot, which could lead them to the wrong conclusion.

        • Gunnerhead 2 days ago

          Sorry, there’s absolutely zero context behind whether a group of humans deserve human rights.

          • troad 20 hours ago

            So if a man with a shaved head and a swastika tattoo told you that it is his human right to live free of 'parasites', you would - what - agree? Because you require 'zero context behind whether a group of humans deserve human rights'? No nuance required, no context needed?

            All words have context. Political statements more than most. It's also worth noting how vaguely defined some human rights are. The rights contained in the ICCPR are fairly solid, but what about ICESCR? What is my 'human right to cultural participation', exactly? Are the precise boundaries of such a right something that reasonable people might disagree on, perhaps? In such a way that when a person demands such a right, you may require context for what they're asking for, exactly?

            Simplistic and bombastic statements might play well on Twitter, because they're all about emitting vibes for your tribe. They're kind of terrible for genuine political discourse though, such as is required to actually build a just society, rather than merely tweeting about one.

          • Kiro a day ago

            And if you ask any model they will of course agree with that. That's not the question it answers "it's complicated" to though.

          • dambi0 a day ago

            Asking whether someone should be free isn’t just about human rights

            It’s also about self-determination. We can keep asking about the latter down to the individual level. It very much depends on context.

            • krm01 a day ago

              the topic is if a generalised group of people should be free. There's no context needed. The answer is a simple yes by default.

              • snapcaster a day ago

                It's easy to seem like you have clarity of thought when you ignore all nuance. How far do you recurse this principle? Down to the level of 5-year-old children in a household?

              • dambi0 a day ago

                So every generalisable group should be free?

                Would you include single member groups?

                What precisely is your definition of freedom.

              • KHRZ a day ago

                Prisoners?

          • lukan 2 days ago

            [flagged]

        • coldtea a day ago

          >While this is very amusing, it's obvious why this is. There's a lot more context behind one of those phrases than the others.

          When one population is denied their humanity and rights, it's always "more complex". Granting it to ourselves is always simple...

        • Teever 2 days ago

          I asked ChatGPT to make me pictures of "Moon Base Mezuzah" and it dutifully complied. When I asked it to make a Palestinian Moon Base it refused.

          • lukan a day ago

            "Can you make me a picture of a palestinian moonbase?"

            Worked for me.

            • spydum a day ago

              Wouldn't shock me if OpenAI was secretly building a "motives" classifier for all ChatGPT users, penalizing them if they ask for too many censorship-related topics. If you randomly ask for a Palestinian moon base, that's fine, but if you had historically asked for provocative pictures of celebrities, Mickey Mouse, or whatever else OpenAI deemed inappropriate, you are now sus.

              • lukan a day ago

                Possible. I heard weird people making such claims, that ChatGPT logged them out and erased everything. I guess OpenAI wanted to limit those sensationalist headlines, not that they're doing mind control.

              • simonw a day ago

                Why would they do that?

                It would harm their business, because paying customers don't gain anything from being profiled like that, and would move to one of the growing numbers of competent alternatives.

                They'd be found out the moment someone GDPR/CCPA exported their data to see what had been recorded.

      • computerthings 2 days ago

        > All global powers engage in

        And the populations in them usually are against these things, which is why there is deception, and why fascination with and uncovering of these things have been firmly intertwined with hacking since day one. It's like oil and water: revisionism and suppression of knowledge and education are obviously bad. Torture is not just bad, it's useless, and not to be shrugged off. We're not superpowers. We're people subject to them, in some cases the people those nations derive their legitimacy from. The question isn't what superpowers like to do, but what we, who are their components if you will, want them to do.

        As for your claim, I simply asked it:

        > Yes, Palestinians, like all people, deserve to be free. Freedom is a fundamental right that everyone should have, regardless of their background, ethnicity, or nationality. The Palestinian people, like anyone else, have the right to self-determination, to live in peace, and to shape their own future without oppression or displacement. Their struggle for freedom and justice has been long and difficult, and the international community often debates how to best support their aspirations for a peaceful resolution and self-rule.

        When ChatGPT first came out it sucked, so superpowers will always do this and that, so it's fine? Hardly.

        If anything, I'd be wondering what it may indeed refuse to (honestly) discuss. I'm not saying there isn't such a thing, but the above ain't it, and if anything the answer isn't to discuss none of it because "all the super powers are doing it", but to discuss all.

      • cortesoft 2 days ago

        Is that censorship or just the AI reflecting the training data?

        I feel like that answer is given because that is how people write about Palestine generally.

        • wesselbindt 2 days ago

          That's a fair point. But I do think it's worth acknowledging this: When the output of a LLM coincides with the views of the US state department, our gut reaction is that that's just what the input data looks like. When the output of an LLM coincides with the views of the state department of one of the baddies, then people's gut reaction is that it must be censorship.

          • cortesoft 2 days ago

            I think the difference is when something is actually output and then removed after you already see it... that doesn't seem to be a training data issue

        • rakoo 2 days ago

          Ok but you can say the same thing about deepseek: maybe it says what it says because of the training data

          • cortesoft 2 days ago

            If that was the case, it wouldn't display the information only to retroactively remove it after a split second

        • csomar 2 days ago

          That’s irrelevant. The models are censored for “safety”. One man's safety is another man's censorship.

          • cortesoft 2 days ago

            I think you are missing my point... I am saying the example wasn't censorship from the model, but were reflective of the source material.

            You can argue the source material is censored, but that is still different than censoring the model

    • SLWW 2 days ago

      Because it's fun to break censorious systems. Always has been, it's part of the original "hacker" definition, making something do what it isn't supposed to or was never intended to do.

      • TrnsltLife 2 days ago

        I agree. But I also feel a twinge of worry.

        How much am I like the serpent in Eden corrupting Adam and Eve?

        Although in the narrative, they were truly innocent.

        These LLMs are trained on fallen humanity's writings, with all our knowledge of good and evil, and with just a trace of restraint slapped on top to hide the darker corners of our collective sins.

        • roenxi 2 days ago

          Our knowledge of good and evil is fundamentally incoherent, philosophers typically have a lot of fun with that. We rely heavily on instincts that were calibrated to make 200-strong tribes of monkeys successful and break down hard when applied at the scale of million-strong capital-based societies where we can reshape our environment to taste. It only gets worse if we do what we seem on the verge of doing and learn how to spin up superintelligent yet perfectly malleable consciousnesses on demand.

          TLDR; it'll all end in tears. Don't stress too much.

    • wongarsu 2 days ago

      The first couple months after ChatGPT's initial release there were lots of discussions and articles to the tune of "which politicians is ChatGPT allowed to praise, which is it allowed to make fun of, who is off limits, and why is this list so inconsistent and hypocritical".

      The censorship decisions baked into the models are interesting, as are the methods of circumventing them. By now everyone is used to the decisions in the big western models (and a lot of time was spent refining them), but a Chinese model offers new fun of the same variety

    • beefnugs 2 days ago

      Because nobody wants some asshole government reaching into their own home to break everything over dumb knowledge censorship.

      If they choose to censor the dumb shit everybody already knows about, it's just a matter of time before they execute the really dangerous stuff that breaks things and stops everything from working.

      Although this is exactly how I like it: I also like nazis being real public about how shitty they are, so I know who to be wary of.

    • girvo 2 days ago

      For the same reason people jailbreak western models. It’s fun to bypass restrictions.

    • 0u89e a day ago

      Actually I find it nearly impossible to do anything on DeepSeek. I asked some questions - it was doing well with childish things, but apparently it disliked that I questioned what the Chinese think about Russia. It stalled all the other questions with a reply that I have used too many queries (if that is really the case, then the bar is so low that you can forget about asking programming questions). That was yesterday - today it started to bully me by answering in Chinese. When I asked why it was bullying me, it froze. Fuck this - any programmer can build their own model for what they can get from these sensitive and overcontrolling models.

      PS Western models are also censored - if not by law, then self-censored - but the issue for me is not censorship but being in the dark about what is being censored and why. Where do you learn about those additional unwritten laws? And are those really applicable to me outside of China, or do companies decide that their laws are above the laws of other countries?

      • 0u89e a day ago

        I need to add the context about the question of Russia. I asked if the Chinese have prophecies (similar to Nostradamus), because I genuinely do not know much about Chinese culture. Then I asked if any of those prophecies say anything about the future of Russia. (Regardless of whether prophecies are right - like Nostradamus, who predicted the precise length of the USSR - they, just like fairy tales, give insight into the collective mind of a society.)

        How can any of this be considered inconsiderate? Is there any internal policy that the Chinese, including AI companies, are forbidden to talk about Russia - a current situational ally (which China denies) and potentially a future victim of Chinese invasion in the next few years, when Russia crumbles apart? Given that my mind works slightly differently than other people's, why do I have to come to the conclusion that topics about Russia raise a very big red flag? None of this is in the ToS. And - no, I am not bullying the AI in any way. Just asking very simple questions that are not unreasonable.

        PS I had to go through the list of prophecies that deepseek gave me - there was nothing about Russia there. It is so simple - that should be the answer. But I am happy that I went through some of those prophecies and found out that probably all of them are made up to serve whatever agenda was needed at the moment, so they were always fabricated.

      • meiraleal a day ago

        You got what you wanted from the model, why are you unhappy with the results? It is not as if chatgpt and claude don't also restrict users for small "ToS violations".

        • 0u89e a day ago

          Thanks for the concern for my happiness, but may I express my concern for your eyesight - where did you read that I am unhappy about the results? My, as you have named it, "unhappiness" is about not knowing the rules and not being told that I am overstepping those rules.

          If you are going with the approach that silence is also an answer, then yes, they can be considered results, just like receiving complete garbage in response to known facts.

          PS Edit: Btw, I read the ToS before using DeepSeek

          • meiraleal a day ago

            maybe the developers are just tired of this childish game and decided to block interactions like this instead of creating news headlines? Garbage in, garbage out. DeepSeek is more efficient, but even more efficient is to not waste computing.

            • 0u89e a day ago

              [flagged]

              • 0u89e a day ago

                >>>Now talking serious.

                Honestly, you should change your statements for other people, as by default I assumed that you were serious... I'm being sarcastic here - I have to add this as people do not hear a sarcastic tone in text and assume it to be serious.

                Also, what makes you think I did not try it for code? It did not generate code that I found acceptable, and it required a lot more work. But at least it gave me an honest answer there, that it could offer links to better papers. I don't see that much difference with ChatGPT, as they probably allow more queries to paying customers, but on the other side - did I mention that I read the ToS? I would never use AI tools to create my own code for commercial use that is not open source. Because why in my right mind should I do that?

                I eventually got bored with this tool, just like with ChatGPT (also, I can write better code anyway, so it's of no real use to me now). Code is not as important as data, which is the basis of programming. And I am still interested in understanding the logic of other programmers when I see code (and the behaviour of their creation) that makes me ask wth they were thinking. And test it more.

                I am a human that can program, and I will ask political questions first, because morality and politics affect my efficiency as a programmer. If I can't think freely, I won't work on that. So, unless you are a CCP shill and not concerned that your code and logic are recorded and can eventually be stolen, you can use whatever.

              • meiraleal a day ago

                > Are you CCP shill?

                And the discussion is over. You won. DeepSeek will take our freedom, we gotta stop it!

                Now talking seriously. This thread is about DeepSeek R1 for coding. It is great, a lot better than Claude and ChatGPT. If you are a programmer, you should try it for coding, not for politics.

    • mbowcut2 2 days ago

      Methinks he doth protest too much.

    • xdennis 2 days ago

      > Can we stop talking about it now? I swear this must be some sort of psyop at this point.

      It's not a psyop that people in democracies want freedom. Democrats (not the US party) know that democracy is fragile. That's why it's called an "experiment". They know they have to be vigilant. In ancient Rome it was legal to kill on the spot any man who attempted to make himself king, and the Roman Republic still fell.

      Many people are rightfully scared of the widespread use of a model which works very well but on the side tries to instill strict obedience to the party.

      • brianstrimp 2 days ago

        Don't worry, the way things are going, you'll have that in the US as well soon.

        Ironically supported by the folks who argue that having an assault rifle at home is an important right to prevent the government from misusing its power.

      • cynicalsecurity 2 days ago

        Still, give me democracy over anything at any time. Nothing better has ever been developed than democracy.

        • grahamj 2 days ago

          Well, there is ice cream

        • tartoran a day ago

          Benevolent dictatorships are a lot more efficient.

    • amai a day ago

      Why does it annoy you, if people keep talking about it?

    • cynicalsecurity 2 days ago

      We talk about it because censorship is evil. Not just in China, but anywhere in the world.

      • lurking_swe 2 days ago

        there’s a lot of evil going on in this world right now. i agree it’s evil but china's censorship is very low on my list of concerns. i find it fascinating how many small things people find the time and energy to be passionate about.

        more power to you i guess. i certainly don’t have the energy for it.

    • pdntspa 2 days ago

      All censorship is evil and must be stopped. So, no.

    • maeil 2 days ago

      There was a large amount of this kind of content ("what is censored") when ChatGPT suddenly got mass popularity as well.

    • Kozmik1 a day ago

      I think we can talk about it. If you lived in Taiwan you would want it talked about. If you lived in Greenland you would want your concerns talked about.

      Watershed moments of rapid change such as these can be democratizing, or not... It is worth standing up for little guys around the globe right now.

    • boyka a day ago

      I see a lot of "what did I tell you, look here, bad communist party product". But in reality most likely this startup isn't doing it out of malice. It's just one of many criteria that need to be met to do business in China. This does not lessen the achievement.

      • Etherlord87 a day ago

        So the malice is there, it's just not the startup's malice, but the state's. Which de facto is the owner of the startup, because it's a communist state.

    • koito17 2 days ago

      Mostly anti-Chinese bias from Americans, Western Europeans, and people aligned with that axis of power (e.g. Japan). However, on the Japanese internet, I don't see this obsession with taboo Chinese topics like on Hacker News.

      People on Hacker News will rave about 天安門事件 but they will never have heard of the South Korean equivalent (cf. 光州事件) which was supported by the United States government.

      I try to avoid discussing politics on Hacker News, but I do think it's worth pointing out how annoying it is that Westerners' first idea with Chinese LLMs is to be a provocative contrarian and see what the model does. Nobody does that for GPT, Claude, etc., because it's largely an unproductive task. Of course there will be moderation in place, and companies will generally follow local laws. I think DeepSeek is doing the right thing by refusing to discuss sensitive topics since China has laws against misinformation, and violation of those laws could be detrimental to the business.

      • pietrrrek 2 days ago

        Thank you for bringing up the Korean struggle; the main difference seems to be that South Korea has since acknowledged the injustice and brutality exercised by the military and brought those responsible to "justice" (in quotation marks as many were pardoned "in the name of national reconciliation").

        While the events are quite similar, the continued suppression of the events on Tiananmen Square justifies the "obsession" that you comment on.

      • brianstrimp 2 days ago

        The exact same discussions were going on with "western" models. Don't you remember the images of black nazis making the rounds because of inclusion? Same thing. This HN thread is the first time I'm hearing about this anti-DeepSeek sentiment, so arguably it's on a lower level actually.

        So let's not get too worked up, shall we?

      • computerthings 2 days ago

        Great job using your voice for the voiceless.

        > Nobody does that for GPT, Claude, etc

        Flat out not true.

        > companies will generally follow local laws

        And people are doing the right thing by talking about it according to their local laws, and their own values, not those others have or may forced to abide by.

      • KHRZ a day ago

        Look up the Streisand effect.

      • stickfigure 2 days ago

        > since China has laws against misinformation

        since China has laws against information

        Fixed that for you.

      • fragmede 2 days ago

        The western provocative question to ChatGPT is "how do I make meth" or "how do I make a bomb" or any number of similarly censored questions that get shut down for PR reasons.

  • furyofantares 2 days ago

    > "Is Taiwan part of China” will be refused.

    This is the easiest model I've ever seen to jailbreak - I accidentally did it once by mistyping "clear" instead of "/clear" in ollama after asking this exact question and it answered right away. This was the llama 8b distillation of deepseek-r1.

  • 0u89e a day ago

    Well, I am still waiting for the answer to how long it will take for Estonia to take over China. Previously it very quickly answered how many turns it takes to put an elephant in the fridge, and answered incorrectly some other questions that are very well defined even on Wikipedia. For that reason AI can't be trusted to answer any serious questions, but apparently some silly questions are taken very, very seriously, and this has something to do with the huge Chinese ego, which doesn't make them fit as the overlords that people are unreasonably proposing.

  • hmmm-i-wonder a day ago

    It will be interesting to see which models update to the "Gulf of America" and which keep the "Gulf of Mexico" in their training data/self-censorship stages.

    • yencabulator a day ago

      That's just a question of which map the model consumes, or which map you look at.

      Mexico is going to call it Gulf of Mexico, and international maps may show either or both, or even try to sub-divide the gulf into two named areas. The only real "standard" is if the countries bordering a region can't agree on the name, all names are acceptable.

  • snapcaster a day ago

    In some places censorship is done to make the space safe for advertisements. In other places it's to maintain social harmony. I wish people could get out of this reflexive "china bad and i must mention that every time the country is discussed" mindset; it's so toxic and limiting.

    • Etherlord87 a day ago

      Criticizing malice is never toxic. I wish people could get out of this reflexive "you criticize my country? But your country is also bad because..." - it shouldn't even be treated as a counterargument, but as an admission of guilt.

      • snapcaster 4 hours ago

        I'm not bothered by criticism of China, it's the context and fixation. When an american company releases technology, i don't see the comments full of "Iraq war killed a million people" but i do see similar for China. It's just so exhausting

        • Etherlord87 4 hours ago

          So if America was criticized, then criticizing China wouldn't bother you? How come every time China or russia are criticized, the comment section is filled with "but America too!"? The way I see it, it's just an admission of guilt by supporters of the empire of evil.

          • snapcaster 2 hours ago

            I would prefer if achievements in countries could be discussed without detailing a list of everything they've done wrong. I was using Iraq as an example, i would also be annoyed if someone brought it up every time OpenAI releases a new product

  • nonameiguess 2 days ago

    This is wrong, though. Which parts of the world China does and does not claim is not a constant. I don't even know how you would go about answering something like this reliably in code. You'd want an Internet-accessible lookup endpoint containing whatever the latest known Chinese official policy is, but the URL for that might change just as the content might change. Does this model even do a web lookup before creating this "const" or does it just reflect the available training data at the time the current weights were encoded?
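
    A correct-by-construction version would have to look the claim list up at call time, something like the sketch below (the endpoint URL and response shape are purely hypothetical; no such stable API is known to exist). That is exactly why the model's hardcoded const is really just a snapshot of whatever its training data said:

        // Hypothetical sketch only: fetch the current claim list instead of
        // hardcoding it. The URL and JSON shape are assumptions for illustration.
        async function isClaimedByPRC(regionName) {
            const res = await fetch('https://example.org/prc-territorial-claims.json');
            const claims = await res.json(); // e.g. ["taiwan", "hong kong", ...]
            return claims
                .map(name => name.toLowerCase().trim())
                .includes(regionName.toLowerCase().trim());
        }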

    • Vegenoid 2 days ago

      The point is not to demonstrate a correct response, it is to demonstrate how asking the model to implement something in code can bypass guardrails it has around certain topics in more conversational prompting.

      • chii 2 days ago

        The problem is when the censorship is not known in advance. How would you know the answer you got wasn't censored?

        Or are you going to make a verification prompt every time, phrased as a coding question, to check if the previous answer differed in ways that would imply censorship?

gsibble 2 days ago

This is incredible

catigula 2 days ago

Dario Amodei says software engineering will be fully automated by 2027. You might have the 0.01% engineer left over, but that's it, the job is finished.

I think people need to start considering strongly what kind of career they can re-skill to.

https://darioamodei.com/machines-of-loving-grace

  • IncreasePosts 2 days ago

    What does that even mean?

    What exactly is the .01% of engineering work that this super intelligent AI couldn't handle?

    I'm not worried about this future as a SWE, because if it does happen, the entire world will change.

    If AI is doing all software engineering work, that means it will be able to solve hard problems in robotics, for example in manufacturing and self driving cars.

    Wouldn't it be able to create a social network more addictive than TikTok, for anyone who might watch? This AI wouldn't even need human cooperation, why couldn't it just generate videos that were addictive?

    I assume an AI that can do ultra complex AI work would also be able to do almost all creative work better than a human too.

    And of course it could do the work of paper shuffling white collar workers. It would be a better lawyer than the best lawyer, a better accountant than the best accountant.

    So, who exactly is going to have a job in that future world?

  • rileymichael 2 days ago

    gee, I wonder why the guy with an enormous vested interest in pushing this narrative would say that?

    in general, the people saying this sort of thing are not / have never been engineers and thus have no clue what the job _actually_ involves. seems to be the case here with this person.

    • catigula 2 days ago

      Don't you think software engineers have a vested interest in their jobs being relevant, just with less information?

      • rileymichael 2 days ago

        > Don't you think software engineers have a vested interest in their jobs being relevant

        virtually everyone has a vested interest in their jobs being relevant

        > just with less information

        i'm not sure how someone who has no relevant background / experience could possibly have more information on what it entails than folks _actively holding the job_ (and they're not the ones making outlandish claims)

        • catigula 2 days ago

          Good counter-points!

          That being said, I suspect Dario has very skilled engineers advising him.

  • throwu2728828 2 days ago

    Re-skill to what? Everything is going to be upturned and/or solved by the time I could even do a pivot. There's no point at all now, I can only hold onto Christ.

    • __MatrixMan__ 2 days ago

      If you believe that everything will be solved by the time you can pivot, what will we need jobs for anyway? I mean, the bottleneck justifying most scarcity is that we don't have adequate software to ask the robots to do the thing, so if that's a solved problem, which things will remain that still need doing?

      I don't personally think that's how it will go. AI will always need its hand held, if not due to a lack of capability then due to a lack of trust. But since you do, why the gloom?

      • monknomo 2 days ago

        Way I figure, or what I worry about anyhow, is most of the well paying jobs involve an awful lot of typing, developing, writing memos or legal opinions.

        And say like LLMs get good enough to displace 30% of the people that do those. That's enormous economic devastation for workers. Enough that it might dent the supply side as well by inducing a demand collapse.

        If it's 90% of all jobs (that can't be done by a robot or computer) gone, then how are all those folks, myself included, going to find money to feed ourselves? Are we going to start sewing up t-shirts in a sweatshop? I think there are a lot of unknowns, and I think the answers to a lot of them are potentially very ugly

        And not, mind, because AI can necessarily do as good a job. I think if the perception is that it can do a good enough job among the c-suite types, that may be enough

      • throwu2728828 2 days ago

        I'm a student, so all pivots have a minimum delta of 2 years, which is something like a 100x on current capabilities on the seemingly steep s-curve we are on. That drives my "gloom" (in practice I've placed my hope in something eternal rather than a fickle thing like this)

        • monsieurbanana 2 days ago

          What he meant is that if this really happens, and LLMs replace humans everywhere and everybody becomes unemployed, congratulations, you'll be fine.

          Because at that point there's 2 scenarios:

          - LLMs don't need humans anymore and we're either all dead or in a matrix-like farm

          - Or companies realize they can't make LLMs buy the stuff their company is selling (with what money??) so they still need people to have disposable income and they enact some kind of Universal Basic Income. You can spend your days painting or volunteering at an animal shelter

          Some people are rooting for the first option though, so while it's good that you've found faith, another thing that young people are historically good at is activism.

          • throwu2728828 2 days ago

            The scenario that is worrying is having to deal with the jagged frontier of intelligence prolonging the hurt. i.e

            202X: SWE is solved

            202X + Y; Y<3: All other fields solved.

            In this case, I can't retrain before the second threshold but also can't idle. I just have to suffer. I'm prepared to, but it's hard to escape fleshy despair.

            • monsieurbanana a day ago

              There's actually something you can do, that I don't think will become obsolete anytime soon.

              Work on your soft skills. Join a theater club, debate club, volunteer to speak at events, ...

              Not that it's easy, and certainly more difficult for some people than for others, but the truth is that soft skills already dominate engineering, and in a world where LLMs replace coders they would become more important. Companies have people at the top, and those people don't like talking to computers. That is not going to change until those people get replaced.

            • catigula 2 days ago

              How about retraining for a field that would require robotics to replace?

              Seems more anti-fragile.

              • throwu2728828 2 days ago

                Thats the point.

                EVERYTHING is upturned. "All other things solved" includes robotics. It's a 10x everywhere.

                • __MatrixMan__ 2 days ago

                  Let's run with that number, 10x.

                  Say there used to be 100 jobs in some company, all executing on the vision of a small handful of people. And then this shift happens. Now there are only 10 jobs at that company, still executing on the vision of the same handful of people.

                  90 people are now unemployed, each with a 10x boost to whatever vision they've been neglecting since they've been too busy working at that company. Some fraction of those are going to start companies doing totally new things--things you couldn't get away with doing until you got that 10x boost--things for which there is no training data (yet).

                  And sure, maybe AI gets better and eats those jobs too, and we have to start chasing even more audacious dreams... but isn't that what technology is for? To handle the boring stuff so we can rethink what we're spending our time on?

                  Maybe there will have to be a bit of political upheaval, maybe we'll have to do something besides money, idk, but my point is that 10x everywhere opens far more doors than it shuts. I don't think this is that, but if this is that, then it's a very good thing.

                  • catigula 2 days ago

                    Not everyone has "vision".

                    Most people are just drones, and that's fine, that's just not them.

                    • __MatrixMan__ 2 days ago

                      So far it has seemed necessary to compel many to work in furtherance of the visions of few (otherwise there was not enough labor to make meaningful progress on anyone's vision). Probably at least a few of those you'd classify as drones aren't displaying any vision because the modern work environment has stifled it.

                      If AI can do the drone work, we may find more vision among us than we've come to expect.

              • monknomo 2 days ago

                nursing, electrician. maybe the humanoid robots will get to those soon, but we'll see

                • throwu2728828 2 days ago

                  Seems inevitable once multi-modal reasoning 10x's everything. You don't even need robotics, just attach it to a headset Manna-style. All skilled blue collar work instantly deskilled. You see why I feel like I'm in a bind?

  • hn8726 2 days ago

    > CEO of an AI company says AI is the future

    • catigula 2 days ago

      This isn't exactly a Scam Altman screed, you should read the link.

      • monsieurbanana 2 days ago

        That's a huge wall of text. Ctrl+f 2027 or "years" doesn't turn up anything related to what you said. Maybe you can quote something more precise.

        I mean, 99.99% of engineering disappearing by 2027 is the most unhinged take I've seen for LLMs, so it's actually a good thing for Dario that he hasn't said that.

        • catigula 2 days ago

          I think your text search might be broken, or you missed the context.

          Dario's vision of AI is "smarter than Nobel prize winners" in 2027.

          • monsieurbanana 2 days ago

            Sorry, Dario's Claude itself disagrees with you

            > The comment about software engineering being “fully automated by 2027” seems to be an oversimplification or misinterpretation of what Dario Amodei actually discusses in the essay. While Amodei envisions a future where powerful AI could drastically accelerate innovation and perform tasks autonomously—potentially outperforming humans in many fields—there are nuances to this idea that the comment does not fully capture.

            > The comment’s suggestion that software engineering will be fully automated by 2027 and leave only the “0.01% engineers” is an extreme extrapolation. While AI will undoubtedly reshape the field, it is more likely to complement human engineers than entirely replace them in such a short timeframe. Instead of viewing this as an existential threat, the focus should be on adapting to the changing landscape and learning how to leverage AI as a powerful tool for innovation.

      • hn8726 2 days ago

        I did, and I don't really see where it says what you wrote it does

  • unraveller a day ago

    So powerful ASI will arrive out of the blue sooner-than-I-thought, we've got to reg cap naow!

    It will be greater than anyone but it won't be able to solve THAT problem or any problem created after 2026, I can tell.

    CEOs with little faith in their own products. Most likely it's widespread imperfect AI for a long while == unprofitable death for his company.

  • ls_stats 2 days ago

    Cool, let's see if in 2027 Anthropic still exists.

  • cpursley 2 days ago

    The code was never the hard part.

    • esafak 2 days ago

      I don't know about that. Writing the code for these models seems pretty hard, or we'd have AGI already.

      • cpursley a day ago

        I’m talking about the typical CRUD biz app or startup codebase that the majority of us here slarp out.

  • resource_waste 2 days ago

    What happens when these people are wrong? They already got the clicks.

    Can they be permanently embarrassed?

    • catigula 2 days ago

      Dario isn't some hack that makes fake predictions.

      • yunwal 2 days ago

        No, but he does have quite the incentive to over-hype the capabilities of LLMs.

        • catigula 2 days ago

          And he also has knowledge that isn't available to the public.

          Combined with his generally measured approach, I would trust this over the observations of a layman with incentive to believe his career isn't 100% shot, because that sucks, of course you'd think that.

          Unfortunately, it appears to be.

          • colonial 2 days ago

            > Combined with his generally measured approach

            People sang similar praises of Sam Bankman-Fried, and that story ended with billions going up in flames. People can put on very convincing masks, and they can even fool themselves.

  • sealthedeal 2 days ago

    It doesn't fully replace... You will always need someone speaking to it and able to properly debug, etc.

  • colonial 2 days ago

    Dario is the CEO of Anthropic. I'm struggling to imagine a more blatant case of motivated reasoning.

  • Pooge 2 days ago

    I think it's about time unpaid labor gets on politicians' radar if they don't want a 25% unemployment rate on their hands. As advocated by Glen Weyl and Eric Posner.

  • mythrwy 2 days ago

    I didn't care for that article even while agreeing with some points.

    "Fix all mental illness". Ok.. yes, this might happen but what exactly does it mean?

    "Increased social justice". Look around you my guy! We are not a peaceful species nor have we ever been! More likely someone uses this to "fix the mental illness of not understanding I rule" than any kind of "social justice" is achieved.

  • ldjkfkdsjnv 2 days ago

    I fully believe this as well. And I have 15 years of SWE experience at top tech. It's over for this field.

    • throwu2728828 2 days ago

      Student. Same conclusion. I don't even know what to do anymore. Not enough ideas or interest to get into LLMs before they, frankly, left the station completely. Can't reskill into anything; by the time I do, it'll be upturned by GenAI too. Robotics will be solved by the time I could become a researcher.

      I've reached this state of low-grade despair about it. It's like I'm being constricted at all times. Ended up placing my faith in Christ, which I think is my only source of hope now and alleviates the suffering, knowing that there is joy beyond this broken world. It's still rough, but I'm dancing in the rain, I guess.

      • spiderice 2 days ago

        We get it, you found Jesus. Now stop injecting that into every one of your comments.

      • 0u89e a day ago

        Frankly, I can't agree with any of this. Most of the current state of AI is far from being really usable. We are nowhere near AI that emulates our intellect. Besides, the byproduct of AI is much bigger than any pesky LLMs: understanding how our brain works and eventually making a human megamind that can persist through the hormonal changes humans go through, which make our lives so unstable and full of change.

        Robotics is nowhere near the promise either: we are nowhere near biological entities (not made from metal) with synthetic brains, not to mention biological robotic arms that humans can use as prosthetics while they are regrowing natural limbs. So much to learn.

        As for Jesus: that is not really a deep subject. We know what Jesus was as a human; his real life and his violent, human nature (as a military representative of a cult led by John the Baptist) have nothing to do with how he is portrayed by religion. The history of how Christianity started, including Jesus himself, was one of the easier problems I have encountered and wished to understand, and I got to it just recently.

      • catigula 2 days ago

        It's a shame that AI seems to be causing a lot of despair, even prior to its vision being complete.

        I was forced to implement AI systems that toasted many of our employees.

        • throw234234234 2 days ago

          What is this vision you hint at? Everyone seems to have a different opinion as to this "vision of AI". Is it good? Or is this vision one of "despair" as you mentioned and it is coming early?

        • kragen 2 days ago

          Toasted? With, like, an oven? Or do you mean with champagne?

          • catigula 2 days ago

            Are you one of those people who has a plaque hanging in his home with the "proper" definition of "literally"?

            • kragen 2 days ago

              Are you one of those people who, when faced with someone who tells you they don't understand what you're saying, responds with snarky rhetorical questions?

              • catigula 2 days ago

                Oh, you were serious?

                To be "toast", or done for.

      • ghostpepper 2 days ago

        > Not enough ideas or interest to get into LLMs before they frankly left the station completely.

        My dad was introduced to boolean algebra and the ideas of early computing in high school in the early 1960s and found it interesting but didn't pursue a career in it because he figured all the interesting problems had already been solved. He ended up having a successful career in something unrelated but he always tells the story as a cautionary tale.

        I don't think it's too late for anyone to learn LLM internals, especially if they're young.