zmmmmm 21 hours ago

I think a key point from this article, and one I agree with strongly, is that it is crucial everyone recognise we are currently in an AI bubble.

I often find people contest this with the non sequitur of "No, it's not a bubble, there is real value there. We are building things with it". The fact that there is real value in the technology does not in any way contradict that we are in a bubble. It may even be supporting evidence for it. Compare with the dot com bubble: nobody would tell you there was no value in the internet. But it was still a bubble. A massive, hyper-inflated bubble. And when it popped, it left large swathes of the industry devastated, even while a residual set of companies were left to carry on and build the "real" eventual internet-based reworking of the entire economy, which took 10-15 years.

People would be well advised to look, at this point in time, at who survived the dot com bubble and why.

  • cj 20 hours ago

    I agree, although bubbles don’t always have to pop in huge ways like the dot com crash did.

    E.g. crypto displayed many, many characteristics of a bubble for a number of years, but the crypto bubble seems like it has just slowly stopped growing and slowly stopped getting larger, rather than popping in a fantastical way. (Not to say it still can’t, of course)

    Then again, this bubble is different in that it has engulfed the entire US economy (including public companies, which is the scary part since the damage potential isn’t limited to private investors). If there’s even a 10% chance of it popping, that’s incredibly frightening.

    • libraryofbabel 19 hours ago

      I think this is a really insightful point. Even if we are in a bubble now, in the sense that current LLM technology (impressive though it is) does not quite live up to the huge valuations of AI companies, there is a plausible future in which we get enough technological progress in the next few years that the bubble never really pops and we are able to morph into a new AI-driven economy without a crash. There are probably good historical examples of this happening with other technologies, although it’s hard to identify them because in retrospect it looks like the optimists invested rationally, even though their bets maybe weren’t all that justified at the time.

      I personally think a crash is more likely than not, but I think we should not assume that history will follow a particular pattern like the dot com bust. There are a variety of ways this can go and anyone who tells you they know how it’s all going to shake out is either guessing or trying to sell you something.

      It is for sure an interesting time to be in the industry. We’ll be able to tell the next generation a lot of stories.

      • zmmmmm 16 hours ago

        It's a good analysis.

        For me the big concern is really the level of detachment from reality that I'm seeing around time scales. People in the startup world seem to utterly fail to appreciate the complexity of changing business processes - for any type of change, let alone for an immature tech where there are still fundamental unsolved problems. The only way for the value of AI to be realised is for large-scale business adoption to happen, and that is simply not achievable in the 2 years of runway most of these companies seem to be on.

    • xsmasher 19 hours ago

      Cryptocurrencies have survived and thrived, but anyone who went all-in on NFTs or blockchain gaming (or anything other than currency on the blockchain?) has been zeroed out.

    • lavarnann 11 hours ago

      > The crypto bubble seems like it has just slowly stopped growing and slowly stopped getting larger

      Bitcoin is now worth 2.3 trillion dollars. The price graph looks like a hockey stick. For tokens in a self-contained ledger system.

      You may be conflating hype and bubble.

  • worldsayshi 20 hours ago

    Once there's a consensus around a bubble the bubble has already burst?

  • asdev 20 hours ago

    everyone does NOT recognize it, just go on Twitter if you don't think so

  • digitcatphd 20 hours ago

    Agreed. I think most of the arguments are premised on AI getting infinitely better for some reason, without anyone outlining clear arguments addressing the inherent architectural limitations of today's LLMs. I have been in this space since 2021 and honestly, besides maybe voice and Gemini Deep Research, things aren't much better than GPT-4.

  • entropsilk 20 hours ago

    The fact everyone thinks we are in an AI bubble is practically proof we are not in an AI bubble.

    The crowd is always wrong on these things. Just like everyone "knew" we were going into a deep recession sometime in late 2022, early 2023. The crowd has an incredibly short memory too.

    What it means is that people are really cautious about AI. That is not a self-reinforcing, fear-of-missing-out, explosive-process bubble. That is a classic bull market climbing a wall of worry.

    • dustingetz 20 hours ago

      technical ICs actually trialing the AI tools think we’re in a bubble. Executives, boards, directors and managers are still tumbling head over heels down the mountain in a race to shovel more money into the fire, because their engineering orgs are not delivering results and they are desperate to find a solution

      • stego-tech 20 hours ago

        This. Highly competent technical ICs in my circles continue to (metaphorically) scream at their Juniors submitting AI slop and being unable to describe what it's doing, why it's doing it that way, or how they could optimize it further, since all management cares about is "that it works".

        Current models excel because of the corpus of the open internet they (stole from) built off of. New languages aren't likely to see as consistent results as old ones simply because these pattern matchers are trained on past history and not new information (see Rust vs C). I think the fact nobody's minting billions turning LLMs into trading bots should be pretty telling in that regard, since finance is a blend of relying on old data for models and intuiting new patterns from fresh data - in other words, directly targeting the weak points of LLMs specifically (inability to adapt to real-time data streams over the long haul).

        AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that. Rather, we're at a crossroads like you say: stakeholders want more money and higher returns (which AI promises), while the people doing the actual work are trying to highlight that internal strife and politics are the holdups, not a lack of brute-force AI. Meanwhile both sides are trying to rattle the proverbial prison bars over the threats to employment real AI will pose (and the threats current LLMs pose to society writ large), but the booster side's actions (e.g., donating to far-right candidates that oppose the very social reforms AI CEOs claim are needed) betray their real motives: more money, less workers, more power.

        • rightbyte 20 hours ago

          > AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that.

          Is this the consensus on nomenclature? I thought "AI doomers" meant people who think some dystopia will come out of it. In that case I've been reading a lot of text wrong.

          • stego-tech 18 hours ago

            At this point, my perspective is that the bubble talk has effectively boiled viewpoints into booster or doomer camps based solely on one’s buy-in of the argument these companies have created actual intelligence that can wholesale replace humans. There doesn’t seem to be much room for nuance at the moment, as the proverbial battle lines have been drawn by the loudest voices on either side.

    • layer8 20 hours ago

      But it's not true that everyone thinks we are in an AI bubble.

      • UncleOxidant 14 hours ago

        Most people don't think they're in a bubble until it starts to pop.

        • layer8 8 hours ago

          So you seem to be opposing entropsilk's argument, same as I did.

    • giantg2 20 hours ago

      We might be in a bull market. The question is for how long. I would guess less than a year considering the market-wide P/E.

    • ryoshu 20 hours ago

      Not really. Worked through the dotcom bubble. It was obvious to some people on the ground doing the work. It was obvious to some execs who took advantage of it. Feels similar. Especially if you are burning through tokens on Gemini CLI and Claude Code where the spend doesn’t match the outcomes.

      • DrewADesign 20 hours ago

        I saw someone earnestly say that a business model with potential to generate actual revenue was no longer relevant, and companies need only generate enough excitement to draw investors to be successful because “the rules have changed.” At that moment, I saw that telltale soapy iridescent sheen. I’ve heard that before.

        I’m worried that the US knowledge industries jumped the shark in the teens and have been living off hopeful investors assuming the next equivalent of the SaaS revolution is right around the corner -- and that AI for whatever reason just won't change things that much, or if it does, the US tech industry will fumble it, assuming its resources and reputation will insulate it from the competition, just like the tech giants of the 90s vs. Internet startups. If that’s true, some industries like biotech will still do fine, but the trajectory of the tech sector generally will start looking like that of the manufacturing sector in the 90s.

    • PessimalDecimal 20 hours ago

      There is absolutely FOMO. It's even being deliberately stoked. "AI won't take your job. People using AI will." This is this hype cycle's "have fun being poor."

lnkl 21 hours ago

>Article praising LLMs.

>Look inside.

>Written by someone having a stake in LLM business.

Every time.

  • senko an hour ago

    Would you rather read an article praising LLMs written by someone having a stake in the chilli pepper business?

    Asking for a friend.

  • bwfan123 20 hours ago

    right yea,

    When's the last time you saw management tell you which compiler or toolchain you need to use to build your code? But now we have CEOs and management dictating how coding should be done.

    In the article the author admits: "I started coding again last year. But I hadn't written production code since 2012" and then goes on to say: "While established developers debate whether AI will replace them, these kids are shipping.".

    Then I ask myself, what are they selling? And lo and behold, it is AI/ML consulting.

    • lavarnann 10 hours ago

      Every praise of LLMs is invariably preceded by some form of "I don't really understand their output, but it looks great". That right there is the strongest signal I've caught so far that the whole thing is just a funny-money pyramid.

      In The Sirens of Titan, Vonnegut tells a story where governments decided to boost the space industry to drive aggregate demand.

      This is exactly what is happening. When you realize that the whole thing is predicated on building and selling more $100,000 GPUs (and the solution to every problem therein is to use even more GPUs), everything really comes into focus.

  • claw-el 20 hours ago

    Alternatively, would someone not having a stake in LLM business have an incentive to disparage LLMs?

    • almostgotcaught 20 hours ago

      Lol that makes zero sense - not having a stake in something is literally the definition of not having an incentive.

      • ej88 16 hours ago

        Not having a stake in something currently rocketing up in value is certainly a cause for FOMO and / or incentive to disparage it.

    • ants_everywhere 20 hours ago

      Lots of people have a stake in disparaging AI. That's why there are so many low quality anti-LLM comments on HN lately

  • EA-3167 21 hours ago

    Hey, at least this one is willing to admit that they aren't building Machine Jesus. That's a start.

    • upghost 20 hours ago

      "I come to bury Caesar, not to praise him..."

      A rhetorical technique as old as dirt, but apparently still effective.

      • EA-3167 20 hours ago

        "Now let it work. Mischief, thou art afoot, take thou what course thou wilt!"

        But seriously, it isn't on me to justify my skepticism of the extreme claim, "We are in a race to build machine super-intelligence" because that skepticism is the rational default. Instead it's the burden of people who claim that we are in fact in that race, just like "self driving next year" was a claim for others to prove, just like "Crypto is the future of money" is a statement requiring a high degree of support.

        We've seen this all before, and in the end the argument in favor seems to boil down to, "Look at how much money we're moving around with this hype" and "Trust us, the best is yet to come."

        Maybe this time it will.

        • upghost 20 hours ago

          Just to clarify, I meant the rhetorical technique was being employed by the author of the article. He's downplaying the "AGI race" in order to normalize and validate the byproduct of the hype bubble as "normal and reliable as electricity and TCP/IP". It's clearly meant to disarm and appeal to skeptics, but there is more than enough dog whistling and performative contradiction in there to make the true intention of the article clear -- praising Caesar.

          For the record, I would be more inclined to be sympathetic towards the author if any receipts (i.e., repos) were produced at all, but as you so correctly stated, extraordinary claims require extraordinary evidence.

          I agree you do not have burden of defending the author's claims, apologies if that was not clear.

asdev 21 hours ago

The biggest issue with AI isn't AI itself, but the fact that it seemingly "saved" an overinflated economy. The economy needs a deep reset with high rates for longer, and the AI narrative is just kicking the can down the road.

  • PessimalDecimal 18 hours ago

    This seems like the point of the hype and why it has so much traction.

    I suspect this hype cycle won't end until a new one forms, whether a new technology or some catastrophic event (disease, war) changes focus and allows the same delaying tactics.

    • lavarnann 10 hours ago

      Every disaster, man-made or not, will be used to drive shock therapy.

      Look at the trajectory of stock prices before and after COVID.

      When this bubble bursts, the ensuing chaos will be used in a similar manner.

stego-tech 20 hours ago

Not a bad position to take, and very similar to my personal one (that gets immediately conflated as "LOL AI DOOMER" by the AI Booster Club): yes, this is a bubble, and yes, it will eventually pop, but the tools won't go away. What's been democratized isn't the entirety of human skills, but the narrow field of custom ML-based tooling, and that's going to change quite a lot in the decades ahead as people utilize them in unexpectedly novel ways.

It'll never be AGI or superintelligence, it won't create or cause the singularity, and it'll never be a substitute for learning, practicing, and honing skills into mastery. For the fields LLMs do displace in part or in whole, I still expect it'll largely displace the mediocre or the barely-passable, not the competent or experts. Those experts will, once the bubble pops and the hype train derails, find the novel and transformative uses for LLMs outside of building moats for big enterprises or vamping for investor capital.

I especially enjoy the on-prem/locally-run angle, as I think that is where much of the transformation will occur - in places like homes, small offices, or private datacenters where a GPU or two can accelerate novel tasks for the entity using it, without divulging data to corporate entities or outright competitors. Inference is cheap, and a modest gaming GPU or AI accelerator can easily support 99.9% of individual use cases offline, with the right supporting infrastructure (which is improving daily!).

All in all, an excellent post.

fullstick 21 hours ago

We're building AI workflows at my company. Yes chatbots, but also more interesting/complex workflows that I won't get into. Let's just say we have the data, expertise, and industry structure to leverage AI in valuable and useful ways.

As an engineer, development still comes down to requirements gathering, solid engineering principles, and the tools we already have at our disposal - network calls, rendering the UI, orchestrating containers and jobs, etc.

All that is to say that I thought AI was going to be sexy, like Westworld, and not so boring...

  • brokencode 21 hours ago

    Boring is where the money is. Always has been.

    Westworld robots are still a long way off, but think about how far we’ve come so quickly.

    It’s pretty incredible that natural language computing is now seen as boring when it barely even existed 5 years ago.

stillpointlab 20 hours ago

This is a good article, but it has a flaw that I keep seeing in these. Articles like this say "I built this app, and that app, and another app, and another one". Ok, let's see them. Are they any good? Please post the github link, or link to the webpage.

I'm reminded of the motto of the Royal Society: Nullius in verba.

lgleason 20 hours ago

Have we made some significant advancements similar to what happened with the Internet back in the 90's with HTML? Yes. But, this is a bubble we are currently in just like the .com bubble with lots of irrational exuberance.

That said, the job market is not as crazy as it was during the .com era; in fact, most technologists are finding it more difficult to find work at the moment. Most of this AI hype started when the employment market started to slow down. Usually these bubbles pop after the employment market goes crazy, and employment starts to go nuts when crazy money enters the picture. So if, for example, the Fed really starts to cut rates and/or investment really starts to pick up and we have another boom period, the tail end of that seems historically to be when the bubbles pop.

Put another way, there is a good chance that the bubble will continue to inflate for a few years before it pops.

  • UncleOxidant 14 hours ago

    > Usually these bubbles pop after the employment market goes crazy. The employment starts to go nuts

    Meta has been offering 7-figure salaries for AI talent. This is a very different bubble from the .com bubble. The hiring frenzy in this one is limited to a very small group of people with unique skills/experience that few people possess. At the same time, thousands of other people are being let go in order to pay those big salaries to a few people (and in order to buy more GPUs). The C-suite has become obsessed with the idea that they're going to need far fewer engineers, and they're hiring/firing like it.

ankit219 19 hours ago

Somewhat related to the article, but mostly anecdotal. In SF, I have had chats with (~15) engineers who after some prodding admitted that they feel the whole AGI thing is passing them by (not that it is close). In the sense that they want to be doing something deeper, build/research something which is more than an API call (paraphrasing, not disparaging making API calls), and want to build where the action is (read: train models or be at the forefront). I understand you need a specific skillset to be in that position; it's just slightly off-putting that to do any meaningful work in this field you need a lot of compute. I understand they raised funding and whatnot, yet want something more than what they are working on. I am not sure of the solution, but the cause sure seems to be the hype currently being created.

handbanana_ 21 hours ago

>While established developers debate whether AI will replace them, these kids are shipping. Developers who learned their craft in the age of pull requests and sprint planning sneer at their security failures, not realizing that 'best practices' are about to flip again. The barbarians aren't at the gate. They're deploying to production.

Shipping where? What production? What kids? I've yet to see this. I see the tools everywhere, but not anything built with them. You'd think it would be getting yelled about from the mountaintops, but I'm still waiting.

  • bwfan123 20 hours ago

    I think what has happened is the following:

    A whole bunch of folks got into management thinking coding is beneath them, they are now wielding the power - let the code-monkeys do the typing. Then, turns out, coders are continuing to call the shots, and the management folks have coder-envy.

    Now, with LLMs, coding is again not only within management's reach, but they think it is trivial, and it can be outsourced to the LLM code-monkeys, and management has regained power from the pesky coder-class.

    So, you have management of all stripes "shipping" things, and dictating what coders should do - not realizing that they should stay in their lanes, and let coders decide for themselves what works best in their craft.

    • 0xcafefood 18 hours ago

      This is a really interesting point. Managers are the _only_ people I've heard say things like "it's only a matter of time till all coding interviews are just 'write a prompt to...'" or "soon all coding will just be LLMs writing machine code directly."

      It's struck me as odd that managers of software engineers would seek to negate the field of software development almost completely. But maybe you're onto something.

  • worldsayshi 20 hours ago

    What would qualify as proof? If somebody builds a good product and ships it it will just look like a good product. People will call it vibe coded slop when it fails spectacularly.

    • layer8 20 hours ago

      If there isn't a strong uptick in the general quality and usefulness of software within the next couple of years, then it's not clear what AI coding/design is actually buying us. Other than possibly some cost reduction, but it would be optimistic to assume that the savings go to the users and not to big tech. Regardless, the proof will be in the pudding.

    • handbanana_ 19 hours ago

      I mean, they would provide it -- you would think this is something the AI coding businesses would be highlighting. "Here's an app tens of thousands use every day, built with our AI tools!"

      Heck, they did it with languages for the longest time. Here's Twitter, we built it on Rails, everyone use Rails! Facebook, built on PHP, everyone use PHP! Feels weird that, if these AI tools are doing all this work, no one is showing it off.

time0ut 20 hours ago

The article resonates. AI coding assistants are cool and fun to use, but they just help solve a solved problem faster. The really exciting thing is exploring the new problems we can solve with this tool. It has really reignited my passion for building.

  • hnthrow90348765 20 hours ago

    Businesses realizing a lot of their problems are already solved will be of great help to developers.

    I'm extremely tired of bespoke solutions when OTS or already-known would work just fine.

nunez 2 hours ago

I'm having trouble with the concept of claiming that one "wrote" something that was entirely, or almost-entirely, generated by AI. I use Tesla's FSD all of the time. When it's on, I'm "driving" insofar as I'm monitoring my surroundings and preparing to take over if the car does something crazy. However, I'll usually tell people that the car drove, not me.

I also don't understand the value of using AI to write stuff in loads of unfamiliar languages. I get why one might choose Rust vs. Golang vs. JavaScript depending on the mission, but I would think that those differences go away entirely when you're depending on an LLM to author something in those languages AND you aren't skilled enough in those languages to understand when something's suboptimal or not. This just feels like an express train to bankruptcy via technical debt.

I'm also having trouble with the notion of AI accelerating the creation of side projects. For me, actually writing the code (or figuring out how the language works) is part of the fun that I get from doing side projects. If I wanted to create something as quickly as possible, I'd just buy a SaaS subscription or physical version for what I want.

It's also insane to me that we're just not AT ALL considering how LLMs stunt the growth of our juniors. Spending hours banging my head against the wall on tiny bugs is how I got to where I am today. I'm going to guess that's the case for many of the people on HN as well. That learning process goes away entirely once an LLM goes into the mix. You can just ask it to fix whatever's broken, no understanding of the bug required. This is fine for seniors who know why things happen how they happen, but I can't imagine juniors making up this skill gap.

It's like learning a new language vs having your phone generate whatever in the target language. The end result is the same, but there's no way you can really learn that language with your phone doing the work, unless one assigns no value to learning that language in the first place.

Finally, I have trouble accepting the idea of giving up the keyboard once you become an "architect." I very much understand that us "architects" have less free time in the day to fire up the IDE (death by meetings, basically), but giving that up entirely feels somewhat career-limiting to me. Then again, this is a moot point if the market moves towards making software development an AI-only activity.

What's crazy to me is that most developers and architects sneered at low/no-code solutions because they created unmaintainable codebases that were too proprietary to make sense of, yet here we are lapping up code generated by "coder" LLMs and accepting that they "might" produce insecure code here and there. Insane.

upghost 21 hours ago

> Within weeks I built a serverless system processing 5 million social media posts daily, tracking topic clusters and emerging narratives in real-time. Then brand monitoring dashboards. Then a "robojournalist" that could deep-dive any trending story. Then hardware and firmware specs for a coffee machine. Then my first mobile app.

I call bullshit. Let's see some repos.

  • AznHisoka 20 hours ago

    5 million social media posts is like <1% of all the posts out there. It's just a weekend project.

  • fourthark 19 hours ago

    This is all stuff for private use. It's credible that the author built these things with LLMs, not that they are secure or robust.

esafak 21 hours ago

At least pick a Douglas Adams book cover!

bayesic 21 hours ago

> Sam Altman knew exactly which buttons to push. Congressional testimony about the need for regulation (from the company furthest ahead). Warnings about AI risk. OpenAI's playbook: Build in public, warn about dangers, present yourself as the responsible actor who needs resources to "do it safely."

And this is why Matt Levine calls Sam Altman the greatest business negger of all time

aeon_ai 21 hours ago

The author misses the deeper game: if you genuinely believe AGI is imminent, then current economic metrics become meaningless. Why optimize for revenue when the entire concept of scarcity-based economics dissolves?

The $560B for those who believe in AGI isn't about ROI using today's money-in/money-out formula; it's about power positioning for a post-capitalist transition.

Every major player knows that whoever controls the infrastructure once the threshold is crossed might control what comes after.

The "bubble" narrative assumes these actors are optimizing for quarterly returns rather than civilizational leverage.

  • zmmmmm 20 hours ago

    The problem with this is that it's entirely evidence free.

    I could also say, if you truly believe nuclear fusion is imminent we will have infinite free energy and all current economic metrics are meaningless. But there is no nuclear fusion bubble. Why not? Because people don't believe nuclear fusion is imminent. But for some reason they do believe AGI is imminent - despite there being no actual evidence of that. There is probably less understanding of what is needed to close the gap to true AGI than there is to close the gap to make nuclear fusion possible.

    The only distinction here is what people are willing to "believe" based on pure conjecture - which is why I class it as a true bubble.

    • teeray 20 hours ago

      > The only distinction here is what people are willing to "believe" based on pure conjecture

      It’s a religion. Repent now, the AGI is coming.

    • giantg2 20 hours ago

      "The problem with this is that it's entirely evidence free."

      That's more or less true for predicting any new financial trend.

      If AI is making devs 20-30% more efficient, then you could invest in tech stocks if you think they can ship as much with lower overhead. The financial metrics look better if that's true.

  • chromanoid 21 hours ago

    I think the author addresses this, but dismisses it as fantasy, which is what constitutes the bubble.

coarise 20 hours ago

The entire article revolves around the premise that AGI will not be achieved, which is unjustified. That makes reading the article a waste of time.

commenter711 8 hours ago

Slop-esque article written by someone with a clear bias. At least the linked articles were a nice read...

chromanoid 21 hours ago

Great article! I share the experience mentioned in the article: LLMs facilitate a head-on interaction with any topic. It is similar to instructional YouTube videos (which imo were already transformative) but with the ability to ask detailed questions. And this is what becomes better with each iteration. When creative communities finally settle on generative AI there will be not just a plethora of AI slop, but also highly creative, never-before-seen content. It might lead to a new golden age of indie low-budget movie productions.

  • exasperaited 20 hours ago

    There’s already a new golden age of indie low budget movies. Those guys will not use AI to generate significant parts of their content, because it defeats the point of making an indie movie at all.

    I never cease to be shocked at how little tech people think of what creative people do and why they do it.

    • chromanoid 20 hours ago

      You seem to have an agenda here. I am sure there are many, many visions of special effects and story arcs that could never be realized because of not being able to pull them off. This will change now. Green screens and sophisticated SFX tech will not be necessary to create fantastical images. You may call this kind of movie low-brow entertainment, but I am very curious to see indie movie interpretations of my favorite LitRPG books.

      • exasperaited 20 hours ago

        An agenda? I just give a shit about the creative people (indie filmmakers, photographers, artists, actors, models) I know, and I fail to see what AI brings them that their creativity does not already; special effects are such a tiny part of filmmaking, for example.

        I don’t mean to say I don’t think there are any uses but I think the main misunderstanding here is that what holds indie filmmakers back isn’t access to technology, generally.

        • chromanoid 19 hours ago

          Well, at least I assume that currently indie movies are also somewhat defined by budget and technical limitations. With GenAI you will be able to film an action scene with your smartphone in an empty warehouse that will later look like an authentic full street in medieval Baghdad. GenAI will remove constraints. Constraints that may have led to creativity by themselves, but those constraints also led to constraints in audience and artistic outcome. Imagination will be the limit. And I don't think we will need labels like "organic" to make collaborative efforts with actual actors more accepted than AI-only productions, because good actors bring more to the table than just their face and stature.

          • exasperaited 7 hours ago

            > GenAI will remove constraints.

            I think this really hits on the difference in our understanding because constraints are what cause actual creativity and art to happen.

            A lack of constraints is why big-budget movies are so tedious. Lower budget movies are better because of their constraints.

            • chromanoid 5 hours ago

              I totally see that. But I think it's time for new constraints that are less tied to money and more to the imagination of the creators.

              It will hopefully lead to a democratization of previously expensive settings (e.g. historical periods, fantastical worlds, large-scale events). Many indie movies still have huge budgets and need some kind of sponsor. Now we will hopefully see a wonderful mix of hobbyist, semi-professional, and professional fully independent setups that tell stories without worrying about the financial risks connected to certain forms of artistic expression.

              I don't think it is helpful to gatekeep moviemaking with arbitrary requirements regarding AI usage, nor do I believe that the reliance on patrons or state sponsorship that is prevalent in indie moviemaking is a good thing, given the current neo-feudal and authoritarian currents.

              • exasperaited 2 hours ago

                > I don't think it is helpful to gatekeep movie making with arbitrary requirements regarding AI usage

                I am not gatekeeping at all; I don't understand this argument that this could ever be perceived as gatekeeping. I'm just saying that in my own experience, indie creators tend to perceive generative AI as bullshit, not as liberation.

                Artists who tell you that AI is not helping art are not gatekeeping either.

                • chromanoid 2 hours ago

                  Of course, but AI shouldn't hinder art either. I can understand the sentiment, but if GenAI can help anywhere, it is in creative endeavors.

            • Peritract 6 hours ago

              Jaws is an iconic horror film partly because the mechanical shark kept breaking, forcing them to do more with atmosphere and less with animatronics.

              • exasperaited 5 hours ago

                Yes.

                A more obvious example is The Blair Witch Project, which cost less than a million dollars even after all the marketing was done (and cost essentially nothing to make).

                The original Halloween was a very low-budget movie considering how long it took to shoot.

                Vin Diesel's career was established by his own movie, Strays, which cost less than $50K: essentially zero budget for a film that opened at Sundance.

                Away from films there are many, many examples of massively popular albums and songs that were made essentially for nothing off the back of simple constraints and creativity.

                In the long run, the only way artists will use AI effectively is by deciding on constraints that limit its use.

                Because as soon as you don't limit its use, anyone can do what you can do.

                So I tend towards thinking that AI won't really move the needle in terms of human creativity. It may reframe it. But nobody is going to be liberated creatively by it.

                Tech people, I suspect, tend to assume that AI brings "full creative freedom" to artists the same way a patron does when they say "you can have full creative freedom".

                It's not the same kind of freedom.

    • ants_everywhere 20 hours ago

      Of course they will.

      The world is full of creative people, and some of them will make movies with AI. Those are indie filmmakers.

gargalatas 21 hours ago

I totally agree with the author. Not even the smartphone or the iPhone brought such a sudden change to so many people, and in many cases for free. I know we want to oppose this huge thing just because it doesn't make sense morally, but once you learn to use this tool there is no way back. Just imagine what is coming in the next 5-10 years. Even if the tools remain at the same level as today, people have learned to use them so well that every sector, every industry will speed up tremendously. We will see great new products and ideas emerging. I just can't wait for the revolution.

  • doctorwho42 20 hours ago

    Or we will see people become so dependent on it that they can't see the forest for the trees on countless problems throughout the systems of society and business.