It's not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it with the implication that there would be less scrutiny if you voluntarily sign up for it. Meta would face scrutiny on all ends anyway, so it does not seem a plausible case for signing something voluntary.
One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
The quoted text makes sense when you understand that the EU provides a carveout for training on copyright protected works without a license. It's quite an elegant balance they've suggested despite the challenges it fails to avoid.
Copyright is not a God-given right. It's an economic incentive created by governments to make desired behaviour (writing and publishing books) profitable.
Yes, 100%. And that's why throwing copyright selectively in the bin now, when there's an ongoing massive transfer of wealth from creators to mega-corps, is so surprising. It's almost as if governments were only protecting the economic interests of creators when the creators were powerful (e.g. movie studios), going after individuals for piracy and DRM circumvention. Now that the mega-corps are the ones pirating, at scale, they get a free pass through a loophole designed for individuals (fair use).
Anyway, the show must go on, so we're unlikely to see any reversal of this. It's a big experiment, and not necessarily anything that will benefit even the model providers themselves in the medium term. It's clear that the "free for all" policy on grabbing whatever data you can get is already having chilling effects: from artists and authors not publishing their works publicly, to the locking down of the open web with anti-scraping. We're basically entering an era of adversarial data management, with incentives to exploit others for data while protecting the data you have from others accessing it.
Why? Copyright is 1) presented as being there to protect the interests of the general public, not creators, and 2) the Statute of Anne, the birth of modern copyright law, protected printers - that is, "big business" - over creators anyway, so even that has largely always been a fiction.
But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.
> From artists and authors not publishing their works publicly
The vast majority of creators have never been able to get remotely close to making a living from their creative work, and often, when factoring in their time, they lose money hand over fist trying to get their works noticed.
I generally let it slide because these copyright discussions tend to be about America, and as such it can be assumed American law and what it inherits from British law is what pertains.
>Copyright is 1) presented as being there to protect the interests of the general public, not creators,
Yes, in the U.S. In the EU, creators have moral rights to their works, and the law is there to protect their interests.
There are actually moral rights and rights of exploitation; in the EU you can transfer the latter but not the former.
>But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.
In the EU's view of copyright the public doesn't need to get a good deal, the creators of copyrighted works do.
> There are actually moral rights and rights of exploitation, in EU you can transfer the latter but not the former.
And when we talk about copyright we generally talk about the rights of exploitation, where the rationale used today is about the advancement of arts and sciences - a public benefit. There's a reason the name in English is copy-right, while the other Germanic languages focus more on the work - in the Anglosphere, the notion of moral rights as separate from rights of exploitation is well outside the mainstream.
> In the EU's view of copyright the public doesn't need to get a good deal, the creators of copyrighted works do.
Most individual nations' copyright laws still uphold the pretence of being for the public good, however. Without that pretence, there is no moral basis for restricting the rights of the public the way copyright law does.
But it has nevertheless been abundantly clear, all the way back to the Statute of Anne, that any talk of either public good or rights of exploitation for the creator is an excuse, and that these laws, if anything, mostly exist for the protection of business interests.
>Most individual nations copyright law still does uphold the pretence of being for the public good, however. Without that pretence, there is no moral basis for restricting the rights of the public the way copyright law does.
I of course do not know all the individual EU countries' rules, but my understanding was that the EU's view was what it was because it derived, at least in part, from the previous understanding of its member nations. So the earlier French laws, before ratification and implementation of the EU directive on authors' rights in Law no. 92-597 (1 July 1992), were also focused on the understanding that creators have creators' rights and that protecting these was the purpose of copyright law, and this pattern generally held throughout EU lands (at least any lands currently in the EU; I suppose pre-Brexit this was not the case).
You probably have some other examples, but in my experience European laws have for a long time held that copyright exists to protect the rights of creators, not of the public.
> So the earlier French laws before ratification and implementation of the EU directive on author's rights in Law # 92-597 (1 July 1992) were also focused on the understanding of creators having creator's rights
French law, similar to e.g. Norwegian and German law, separated moral and proprietary rights.
Moral rights are not particularly relevant to this discussion, as they relate specifically to rights to e.g. be recognised as the author, and to protect the integrity of a work. They do not relate to actual copying and publication.
What we call copyright in English is largely proprietary/exploitation rights.
The historical foundation of the latter is firmly one of first granting rights on a case-by-case basis, often to printers rather than creators, and then, with the Statute of Anne, explicitly stating the goal of "encouragement of learning" right in the title of the act. This motivation was later made explicit in, e.g., the US constitution.
Since you mention France, the National Assembly after the French Revolution took the stance that works by default were public property, and that copyright was an exception, in the same vein as per the Statute of Anne and US Constitution ("to promote the progress of science and useful arts").
Depository laws etc., which are near universal, are also firmly rooted in this view that copyright is a grant provided on a quid pro quo basis: the work needs to be secured for the public for the future, irrespective of continued commercial availability.
> Why? Copyright is 1) presented as being there to protect the interests of the general public, not creators
Doesn't matter; both the "public interest" and "creator rights" arguments have the same impact: you're either hurting creators directly, or you're hurting the public benefit when you remove or reduce the economic incentives. The transfer of wealth and irreversible damage is there, whether you care about Lars Ulrich's gold toilet or about our future kids, who won't be able to enjoy culture and libraries protected from adversarial and cynical tech moguls.
> 2) Statute of Anne, the birth of modern copyright law, protected printers - that is "big business" over creators anyway, so even that has largely always been a fiction.
> The vast majority of creators have never been able to get remotely close to make a living from their creative work
Nobody is saying copyright is perfect. We’re saying it’s the system we have and it should apply equally.
Two wrongs don't make a right. Defending the AI corps on the basis of copyright being broken is like saying the tax system is broken, so therefore it's morally right for the ultra-rich to relocate assets to the Caymans. Or saying that democracy is broken, so it's morally sound to circumvent it (like Thiel says).
You've put into words what I've been internally struggling to voice. Information (on the web) is a gas: it expands once it escapes.
In limited, closed systems, it may not escape, but all it takes is one bad (or hacked) actor and the privacy of it is gone.
In a way, we used to be "protected" because it was "too big" to process, store, or access "everything".
Now, especially with an economic incentive to vacuum literally all digital information, and many works being "digital first" (even a word processor vs a typewriter, or a PDF that is sent to a printer instead of lithograph metal plates)... is this the information Armageddon?
Copyright is the backbone of modern media empires. It allows both small creators and massive corporations to seek rent on works, but since works remain under copyright for a century, it's quite nice for corporations.
It is a "right" created by law, is the point. This is not a right that is universally recognised, nor one that has existed since time immemorial, but a modern construction of governments that governments can choose to change or abolish.
What is a right that has existed since time immemorial? Generally, rights that have existed "forever" are codified rights that are, in the codification, described as being eternal. Hence Jefferson's reference to inalienable rights, which probably came as some surprise to King George III.
On edit: if we had a soundtrack, the Clash's "Know Your Rights" would be playing in this comment.
Except of course that the point is that copyright is generally not described this way.
See my more extensive overview in another response.
The history of copyright law is one where it is regularly described either in the debates around the passing of the laws, or in the laws themselves, as a utilitarian bargain between the public and creators.
E.g., since you mention Jefferson and mention "inalienable": notably, copyright in the US is not an inalienable right at all, but a right that the US constitution grants Congress the power to enact "to promote the progress of science and useful arts". It says nothing about being an inalienable or eternal right of citizens.
And before you bring up France, or other European law, I suggest you read the other comment as well.
But to add more than I did in the other comment: e.g. in Norway, the first paragraph of the copyright law ("Lov om opphavsrett til åndsverk mv.") gives three motivations: 1 a) to grant rights to creators to give incentives for cultural production, 1 b) to limit those rights to ensure a balance between creators' rights and public interests, and 1 c) to provide rules to make it easy to arrange use of copyrighted works.
There's that argument about incentives and balancing public interests again.
This is the historical norm. It is not present in every copyright law, but they share the same historical nucleus.
Early copyright was a take on property rights, applied to supposed labour of the soul and subsequent ownership of its fruits.
Copyright stems from the 1500s-1600s, while utilitarianism is a mid-1800s kind of thing. The move from explicitly religious and natural-rights motivations to language about "intellect" and hedonism is rather late, and I expect it is tied to an atheist and utilitarian influence from socialist movements.
The first modern copyright law dates to 1709, and was most certainly not a "take on property rights". Neither were pre-Statute of Anne monopoly grants.
I can find nothing to suggest a "religious and natural rights" motivation, nor any language about "intellect and hedonism".
The Statute of Anne - which specifically gives a utilitarian reason 150 years before your "mid-1800s" estimate - also predates socialism by a similar amount of time, and dates to a time when there certainly wasn't any major atheist influence either, so this is utterly ahistorical nonsense.
Copyright originates in the Statute of Anne[0]; its creation was therefore within living memory when the United States declared their independence.
No rights have existed 'forever', and both the rights and the social problems they intend to resolve are often quite recent (assuming you're not the sort of person who's impressed by a building that's 100 years old).
George III was certainly not surprised by Jefferson's claim to rights, given that the rights he claimed were copied (largely verbatim) from the Bill of Rights 1689[1]. The poor treatment of the Thirteen Colonies was due to Lord North's poor governance, the rights and liberties that the Founding Fathers demanded were long-established in Britain, and their complaints against absolute monarchy were complaints against a system of government that had been abolished a century before.
You should probably reread the text I responded to and then what I wrote, because you seem to think I believe there are rights that are not codified by humans in some way, and are on a mission to correct my mistake.
>George III was certainly not surprised by Jefferson's claim to rights, given that the rights he claimed were copied (largely verbatim) from the Bill of Rights 1689
To repeat: "Hence Jefferson's reference to inalienable rights, which probably came as some surprise to King George III."
"Inalienable" modifies "rights" here; if George is surprised by any rights, it is inalienable ones.
>Copyright originates in the Statute of Anne[0]; its creation was therefore within living memory when the United States declared their independence.
The title of the post is "Meta says it won't sign Europe AI agreement"; I was under the impression that it had something to do with how the EU sees copyright, not how U.S. and British common law sees it.
Hence multiple comments referencing the EU. But I see I must give up, and the U.S. must have its way; evidently the Europe AI agreement is all about how copyright works in the U.S., prime arbiter of all law around the globe.
At any rate, rights that are described as being eternal, or some version of that such as inalienable - or, in the case of copyright, moral and intrinsic - are rights such that, if a government that has heretofore described them as inviolate were to casually violate them, that government would be declaring its own nullification by its previously stated rules.
Not to say this doesn't happen - I believe we can see it happening in some places in the world right now - but these are classes of laws that cannot "just" be changed at a government's whim, and in the EU copyright law is evidently one of those classes of law, strange as it seems.
A lot of cultures have not historically considered artists’ rights to be a thing and have had it essentially imposed on them as a requirement to participate in global trade.
Even in Europe, copyright has been protected only for the last 250 years, and over the last 100 years it's been constantly updated to take new technologies into consideration.
The only real mistake the EU made was not regulating Facebook when it mattered. That site caused pain and damage to entire generations. Now it's too late. All they can do is try to stop Meta and the rest of the lunatics from stealing every book, song and photo ever created, just to train models that could leave half the population without a job.
Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.
Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.
The world is asking for US big tech companies to be regulated more now than ever.
To be fair, "copy"right has only been needed for as long as it's been possible to copy things. In the grand scheme of human history, that technology is relatively new.
Copyright predates mechanical copying. However, people used to have to petition a King or similar to be granted a monopoly on a work, and the monopoly was specific to that work.
The Statute of Anne - the first recognisable copyright law in anything remotely the modern sense - dates to 1709, long after the invention of movable type. Mechanical in the sense of printing with a press using movable type, not anything highly automated.
Having to petition for monopoly rights on an individual basis is nothing like copyright, where the entire point is to avoid having to ask for exceptions by creating a right.
"intellectual property" only exists because society collectively allows it to. it's not some inviolable law of nature. society (or the government that represents them) can revoke it or give it away.
Yes, that is why (most?) anarchists consider property that one is not occupying and using to be fiction, held up by the state. I believe this includes intellectual property as well.
A person being alive is not at all similar to the concept of intellectual property existing. The former is a natural phenomenon, the latter is a social construct.
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"
This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
> This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
I think you've got civil and common law the wrong way round :). US judges have _much_ more power to interpret law!
In the US, for most laws and most judges, there's actually much less power to interpret law. Part of the benefit of the common law system is to provide consistency and take that interpretation power away from the judges of each individual case.
My claim is that at a system-level, judges in the US have more power to interpret laws. Your claim is that "in each individual case, the median amount of interpretation is lower in the US than the EU". But you also concede that this is because the judges rely on the interpretations of _other_ judges in cases (e.g. if the Supreme Court makes a very important decision which clarifies how a law should be interpreted, and this is then carried down throughout the rest of the justice system, then this means that there has been a really large amount of interpretation).
It is European law, as in EU law, not law from a European state. In EU matters, the teleological interpretation, i.e. intent, applies:
> When interpreting EU law, the CJEU pays particular attention to the aim and purpose of EU law (teleological interpretation), rather than focusing exclusively on the wording of the provisions (linguistic interpretation).
> This is explained by numerous factors, in particular the open-ended and policy-oriented rules of the EU Treaties, as well as by EU legal multilingualism.
> Under the latter principle, all EU law is equally authentic in all language versions. Hence, the Court cannot rely on the wording of a single version, as a national court can, in order to give an interpretation of the legal provision under consideration. Therefore, in order to decode the meaning of a legal rule, the Court analyses it especially in the light of its purpose (teleological interpretation) as well as its context (systemic interpretation).
Instead of a license term you can put that in your documentation - in fact that is exactly what the code of practice mentions (see my other comment) for open source models.
An open source cocaine production machine is still an illegal cocaine production machine. The fact that it's open source doesn't matter.
You seem not to have understood that different forms of appliances need to comply with different forms of law. And your being able to call it open source or not doesn't change anything about its legal aspects.
And every law written is a compromise between two opposing parties.
Except that it's seemingly impossible to protect against prompt injection. The cat is out of the bag. Much like a lot of other legislation (e.g. the cookie law, or being responsible for user-generated content when millions of pieces of it are posted per day), it's entirely impractical, albeit well-meaning.
I don't think the cookie law is that impractical? It's easy to comply with by just not storing non-essential user information. It would have been completely nondisruptive if platforms agreed to respect users' defaults via browser settings, and then converged on a common config interface.
It was made impractical by ad platforms and others who decided to use dark patterns, FUD and malicious compliance to deceive users into agreeing to be tracked.
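The "respect users' defaults via browser settings" idea isn't hypothetical: the draft Global Privacy Control proposal already defines a `Sec-GPC: 1` request header that browsers can send as a standing opt-out. A minimal sketch of how a site could honour such a signal server-side, assuming a plain dict of request headers (the `X-Consent` header below is a hypothetical site-specific signal, not part of any standard):

```python
def tracking_allowed(headers: dict) -> bool:
    """Decide whether non-essential tracking may run for this request.

    Defaults to no tracking when no explicit signal is present,
    which is the privacy-preserving fallback described above.
    """
    # Global Privacy Control: "Sec-GPC: 1" is an explicit opt-out
    # that overrides any other signal.
    if headers.get("Sec-GPC", "").strip() == "1":
        return False
    # Hypothetical site-specific consent signal for illustration only.
    return headers.get("X-Consent", "").strip().lower() == "granted"
```

With a scheme like this, a browser-wide default replaces per-site banners: the banner only needs to appear when the site genuinely wants more than the default allows.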
I recently received an email[0] from a UK entity with an enormous wall of text talking about processing of personal information, my rights and how there is a “Contact Card” of my details on their website.
But with a little bit of reading, one could ultimately summarise the enormous wall of text simply as: “We’ve added your email address to a marketing list, click here to opt out.”
The huge wall of text email was designed to confuse and obfuscate as much as possible with them still being able to claim they weren’t breaking protection of personal information laws.
If you ask someone if they killed your dog and they respond with a wall of text, then you’re immediately suspicious. You don’t even have to read it all.
The same is true of privacy policies. I’ve seen some companies have very short policies I could read in less than 30s, those companies are not suspicious.
Because they track usage stats for site development purposes, and there was no convergence on an agreed upon standard interface for browsers since nobody would respect it. Their banners are at least simple yes/no ones without dark patterns.
But yes, perhaps they should have worked with e.g. Mozilla to develop some kind of standard browser interface for this.
This is actually not true. I just read the European commission's cookie policy.
The main reason they need the banner is that they show you full-page popups asking you to take surveys about unrelated topics like climate action. They need consent to track whether or not you've taken these surveys.
Their banner is just as bad as any other I have seen, it covers most of the page and doesn't go away until I click yes. If you're trying to opt out of cookies on other sites, that's probably why it takes you longer (just don't do that).
They create profiles of visitors, e.g. through polls.
It's usually a click or two to "reject all" or similar with serious organisations. Some German corporations are nasty and conflate paywalls with consent to data collection and processing.
It is impractical for me as a user. I have to click through a notice on every website on the internet before interacting with it - often very obtuse notices that don't have a "reject all" button but a "manage my choices" button, which leads to an even more convoluted menu.
Instead of exactly as you say: a global browser option.
As someone who has had to implement this crap repeatedly: I can't even begin to imagine the amount of global time that has been wasted implementing this by everyone, fixing mistakes related to it, and, more importantly, by users having to interact with it.
Yeah, but the only reason for this time wastage is that website operators refuse to accept what would become the fallback default of "minimal", for which they would not need to seek explicit consent. It's a kind of arbitrage, like those scammy websites that send you into redirect loops with enticing headlines.
The law is written to encourage such defaults if anything, it just wasn't profitable enough I guess.
Not even EU institutions themselves are falling back on defaults that don't require cookie consent.
I'm constantly clicking away cookie banners on UK government or NHS (our public healthcare system) websites. The ICO (UK privacy watchdog) requires cookie consent. The EU Data Protection Supervisor wants cookie consent. Almost everyone does.
And you know why that is? It's not because they are scammy ad-funded sites or because of government surveillance. It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis whose sole purpose is improving the site for its visitors.
This is impractical, unreasonable, counterproductive and unintelligent.
This is a personal decision to be made by the data "donor".
The NHS website cookie banner (which does have a correct implementation in that the "no consent" button is of equal prominence to the "mi data es su data" button) says:
> We'd also like to use analytics cookies. These collect feedback and send information about how our site is used to services called Adobe Analytics, Adobe Target, Qualtrics Feedback and Google Analytics. We use this information to improve our site.
In my opinion, it is not, as described, "completely reasonable" to consider such data hand-off to third parties as implicitly consented to. I may trust the NHS but I may not trust their partners.
If the data collected is strictly required for the delivery of the service and is used only for that purpose and destroyed when the purpose is fulfilled (say, login session management), you don't need a banner.
The NHS website is in a slightly tricky position, because I genuinely think they will be trying to use the data for site and service improvement, at least for now, and they hopefully have done their homework to make sure Adobe, say, are also not misusing the data. Do I think the same from, say, the Daily Mail website? Absolutely not, they'll be selling every scrap of data before the TCP connection even closes to anyone paying. Now, I may know the Daily Mail is a wretched hive of villainy and can just not go there, but I do not know about every website I visit. Sadly the scumbags are why no-one gets nice things.
>This is a personal decision to be made by the data "donor".
My problem is that users cannot make this personal decision based on the cookie consent banners because all sites have to request this consent even if they do exactly what they should be doing in their users' interest. There's no useful signal in this noise.
The worst data harvesters look exactly the same as a site that does basic traffic analysis for basic usability purposes.
The law makes it easy for the worst offenders to hide behind everyone else. That's why I'm calling it counterproductive.
[Edit] Wrt NHS specifically - this is a case in point. They use some tools to analyse traffic in order to improve their website. If they honour their own privacy policy, they will have configured those tools accordingly.
I understand that this can still be criticised from various angles. But is this criticism worth destroying the effectiveness of the law and burying far more important distinctions?
The law makes the NHS and the Daily Mail look exactly the same to users as far as privacy and data protection are concerned. This is completely misleading, don't you think?
I don't think it's too misleading, because in the absence of any other information, they are the same.
What you could then add to this system is a certification scheme to permit implicit consent of all the data handling (including who you hand it off to and what they are allowed to do with it, as well as whether they have demonstrated themselves to be trustworthy) is audited to be compliant with some more stringent requirements. It could even be self-certification along the lines of CE marking. But that requires strict enforcement, and the national regulators so far have been a bunch of wet blankets.
That actually would encourage organisations to find ways to get the information they want without violating the privacy of their users and anyone else who strays into their digital properties.
>I don't think it's too misleading, because in the absence of any other information, they are the same.
But other information not being absent we know that they are not the same. Just compare privacy policies for instance. The cookie law makes them appear similar in spite of the fact that they are very different (as of now - who knows what will happen to the NHS).
I do understand the point, but other than allowing a process of auditing to permit a middle ground of implied consent for first-party use only, within some strictly defined boundaries, what else can you do? It's a market for lemons in terms of trustworthy data processors. 90% (bum-pulled figures, but it lines up with the number of websites that play silly buggers with hiding the no-consent button) of all people who want to use data will be up to no good and immediately try to bend and break every rule.
I would also be in favour of companies having to report all their negative data protection judgements against them and everyone they will share your data with in their cookie banner before giving you the choice as to whether you trust them.
If any rule is going to be broken and impossible to enforce, how can that be a justification for keeping a bad rule rather than replacing it with more sensible one?
I said they'd try to break them. Which requires vigilance and regulators stepping in with an enormous hammer. So far national regulators have been pretty weaksauce which is indeed very frustrating.
I'm not against improving the system, and I even proposed something, but I am against letting data abusers run riot because the current system isn't quite 100% perfect.
I'll still take what we have over what we had before (nothing, good luck everyone).
Then we clearly disagree on what they should be doing.
And this is the crux of the problem. The law helps a tiny minority of people enforce an extremely (and in my view pointlessly) strict version of privacy at the cost of misleading everybody else into thinking that using analytics for the purpose of making usability improvements is basically the same thing as sending personal data to 500 data brokers to make money off of it.
I would draw the line where my personal data is exchanged with third parties for the purpose of monetisation. I want the websites I visit to be islands that do not contribute to anyone's attempt to create a complete profile of my online (and indeed offline) life.
I don't care about anything else. They can do whatever A/B testing they want as far as I'm concerned. They can analyse my user journey across multiple visits. They can do segmentation to see how they can best serve different groups of users. They can store my previous search terms, choices and preferences. If it's a shop, they can rank products according to what they think might interest me based on previous visits. These things will likely make the site better for me or at least not much worse.
Other people will surely disagree. That's fine. What's more important than where exactly to draw the line is to recognise that there are trade-offs.
The law seems to be making an assumption that the less sites can do without asking for consent the better most people's privacy will be protected.
But this is a flawed idea, because it creates an opportunity for sites to withhold useful features from people unless and until they consent to a complete loss of privacy.
Other sites that want to provide those features without complete loss of privacy cannot distinguish themselves by not asking for consent.
Part of the problem is the overly strict interpretation of "strictly necessary" by data protection agencies. There are some features that could be seen as strictly necessary for normal usability (such as remembering preferences) but this is not consistently accepted by data protection agencies so sites will still ask for consent to be on the safe side.
> It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors
Yup. That's what those 2000+ "partners" are all about if you believe their "legitimate interest" claims: "improve traffic"
>This is impractical, unreasonable, counterproductive and unintelligent.
It keeps the political grifters who make these regulations employed; that's kind of the main point of the EU/UK's endless stream of regulations upon regulations.
The reality is the data that is gathered is so much more valuable and accurate if you gather consent when you are running a business. Defaulting to a minimal config is just not practical for most businesses either. The decisions that are made with proper tracking data have a real business impact (I can see it myself - working at a client with 7 figure monthly revenue).
I'm fully supportive of consent, but the way it is implemented is impractical from everyone's POV, and I stand by that.
Are you genuinely trying to defend businesses unnecessarily tracking users online? Why can't businesses sell their core product(s) and you know... not track users? If they did that, then they wouldn't need to implement a cookie banner.
Retargeting etc. is massive revenue for online retailers. I support their right to do it if users consent to it. I don't support their right to do it if users have not consented.
The conversation is not about my opinion on tracking, anyway. It’s about the impracticality of implementing the legislation that is hostile and time consuming for both website owners and users alike
Plus with any kind of effort put into a standard browser setting you could easily have some granularity, like: accept anonymous ephemeral data collected to improve website, but not stuff shared with third parties, or anything collected for the purpose of tailoring content or recommendations for you.
Are you genuinely acting this obtuse? What do you think Walmart and every single retailer does when you walk into a physical store? It's constant monitoring to be able to provide a better customer experience. This doesn't change online: businesses want to improve their service and they need the data to do so.
If you're talking about the same jurisdiction as these privacy laws, then this is illegal. You are only allowed to retain video for 24 hours, and basically only to use it for calling the police.
Walmart has sales associates running around gathering all those data points, as well as people standing around monitoring. Their “eyes” aren't regulated.
The question still stands then: Does it happen in Tesco in the EU? Because that is illegal.
The original idea was that it should be legal to track people, because it is ok in the analog world. But it really isn't, and I'm glad it is illegal in the EU. I think it should be in the US also, but the EU can't change that, and I have no right to political influence over foreign countries, so that doesn't matter.
it’s illegal for Tesco to have any number of employees watching/monitoring/“tracking” in the store with their own eyes and using those in-store insights to drive better customer experiences?
Making statistics about sex, age, number of children, clothing choice, and walking speed without consent sounds illegal. I think it's forbidden not just for the company but already for the individual, because that's voyeuristic behaviour.
Watching what is bought is fine, but walking around to do that is useless work, because you have that in the accounting/sales data already.
There is stuff like PayPal and now per-company apps that work the same as on the web: you need to first sign a contract. I would rather that be cracked down on, but I see that it is difficult, because you can't forbid individual choice. The incentive is that products become cheaper when you opt in to data collection. This is already forbidden, though: you can't tie consent to other benefits, because then it isn't freely given consent anymore. I expect a lawsuit in the next decades.
That is only true if you agree with ad platforms that tracking ads are fundamentally required for businesses, which is trivially untrue for most enterprises. Forcing businesses to get off privacy violating tracking practices is good, and it's not the EU that's at fault for forcing companies to be open about ad networks' intransigence on that part.
Regulators often barely grasp how current markets function, and they are supposed to be futurists now too? Government regulatory interests almost always end up lining up with protecting entrenched interests, so it's essentially asking for a slow-moving group of the same mega companies. Which is very much what Europe's market looks like today: stasis, and shifting to a stagnating middle.
And also to prevent European powers trying to kill each other for the third time in a century, setting the whole world on fire in the process - for the third time in a century.
Contrary to the constant whining, most of them are actually quite wealthy. And thanks to strong right to repair laws, they can keep using John Deere equipment without paying extortionate licensing fees.
They're wealthy because they were paid for not using their agricultural land. So they cut down all the trees on the parts of their land they couldn't use in order to classify it as agricultural, got paid, and as a side effect caused downstream flooding.
Well, the topic is really whether or not the EU's regulations are effective at producing desired outcomes. The comment you're responding to is making a strong argument that it isn't. I tend to agree.
There's a certain hubris to applying rules and regulations to a system that you fundamentally don't understand.
For those of us outside the US, it's not hard to understand how regulations work. The US acts as a protectionist country, it sets strict rules and pressures other governments to follow them. But at the same time, it promotes free markets, globalisation, and neoliberal values to everyone else.
The moment the EU shows even a small sign of protectionism, the US complains. It's a double standard.
So the solution is to allow the actual entrenched interests to determine the future of things when they also barely grasp how the current markets function and are currently proclaiming to be futurists?
The best way for "entrenched interests" to stifle competition is to buy/encourage regulation that keeps everybody else out of their sandbox pre-emptively.
For reference, see every highly-regulated industry everywhere.
You think Sam Altman was in testifying to the US Congress begging for AI regulation because he's just a super nice guy?
That's a bit oversimplified. Humans have been creating authority systems trying to control others lives and business since formal societies have been a thing, likely even before agriculture. History is also full of examples of arbitrary and counter productive attempts at control, which is a product of basic human nature combined with power, and why we must always be skeptical.
As a member of 'humanity', do you find yourself creating authority systems for AI though? No.
If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.
The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.
eu resident here. i’ve observed with sadness what a scared and terrified lot the europeans have become. but at least their young people can do drugs, party 72 hours straight, and graffiti all walls in berlin so hey what’s not to like?
one day some historian will be able to pinpoint the exact point in time that europe chose to be anti-progress and fervent traditionalist hell-bent on protecting pizza recipes, ruins of ancient civilization, and a so-called single market. one day!
No, that... that's exactly what we have today. An oligarchy persists through captured state regulation. A more free market would have a constantly changing top.
Depends on the time horizon you look at. A completely unregulated market usually ends up dominated by monopolists… who last a generation or two and then are usurped and become declining oligarchs. True all the way back to the Medici.
In a rigidly regulated market with preemptive action by regulators (like EU, Japan) you end up with a persistent oligarchy that is never replaced. An aristocracy of sorts.
The middle road is the best. Set up a fair playing field and rules of the game, but allow innovation to happen unhindered, until the dust has settled. There should be regulation, but the rules must be bought with blood. The risk of premature regulation is worse.
Calculated, not callous. Quite the opposite: precaution kills people every day, just not as visibly. This is especially true in the area of medicine where innovation (new medicines) aren’t made available even when no other treatment is approved. People die every day by the hundreds of thousands of diseases that we could be innovating against.
You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.
Sometimes you can't reverse the damage and societal change after the market has already been created and shaped. Look at fossil fuels, plastic, social media, etc. We're now dependent on things that cause us harm, the damage done is irreversible and regulation is no longer possible because these innovations are now embedded in the foundations of modern society.
Innovation is good, but there's no need to go as fast as possible. We can be careful about things and study the effects more deeply before unleashing life changing technologies into the world. Now we're seeing the internet get destroyed by LLMs because a few people decided it was ok to do so. The benefits of this are not even clear yet, but we're still doing it just because we can. It's like driving a car at full speed into a corner just to see what's behind it.
I think it’s one of those “everyone knows” things that plastic and social media are bad, but I think the world without them is way, way worse. People focus on these popular narratives but if people thought social media was bad, they wouldn’t use it.
Personally, I don’t think they’re bad. Plastic isn’t that harmful, and neither is social media.
I think people romanticize the past and status quo. Change is scary, so when things change and the world is bad, it is easy to point at anything that changed and say “see, the change is what did it!”
People don't use things that they know are bad, but someone who has grown up in an environment where everyone uses social media for example, can't know that it's bad because they can't experience the alternative anymore. We don't know the effects all the accumulating plastic has on our bodies. The positive effects of these things can be bigger than the negative ones, but we can't know that because we're not even trying to figure it out. Sometimes it might be impossible to find out all the effects before large scale adoption, but still we should at least try. Currently the only study we do before deciding is the one to figure out if it'll make a profit for the owner.
> We don't know the effects all the accumulating plastic has on our bodies.
This is handwaving. We can be pretty well sure at this point what the effects aren’t, given their widespread prevalence for generations. We have a 2+ billion sample size.
No, we can't be sure. There's a lot of diseases that we don't know the cause of, for example. Cancers, dementia, Alzheimer's, etc. There is a possibility that the rates of those diseases are higher because of plastics. Plastic pollution also accumulates, there was a lot less plastic in the environment a few decades ago. We add more faster than it gets removed, and there could be some threshold after which it becomes more of an issue. We might see the effect a few decades from now. Not only on humans, but it's everywhere in the environment now, affecting all life on earth.
You're not arguing in a way that strikes me as intellectually honest.
You're hypothesizing the existence of large negative effects with minimal evidence.
But the positive effects of plastics and social media are extremely well understood and documented. Plastics have revolutionized practically every industry we have.
With that kind of pattern of evidence, I think it makes sense to discount the negatives and be sure to account for all the positives before saying that deploying the technology was a bad idea.
I agree that plastics probably do have more positives than negatives, but my point is that many of our innovations do have large negative effects, and if we take them into use before we understand those negative effects it can be impossible to fix the problems later. Now that we're starting to understand the extent of plastic pollution in our environment, if some future study reveals that it's a causal factor in some of our diseases it'll be too late to do anything about it. The plastic is in the environment and we can't get it out with regulation anymore.
Why take such risks when we could take our time doing more studies and thinking about all the possible scenarios? If we did, we might use plastics where they save lives and not use them in single-use containers and fabrics. We'd get most of the benefit without any of the harm.
I'm sure it's very good the first time you take it. If you don't consider all the effects before taking it, it does make sense. You feel very good, but the even stronger negative effects come after. Same can be said about a lot of technology.
Addiction is a matter of degree. There's a bunch of polls where a large majority of people strongly agree that "they spend too much time on social media". Are they addicts? Are they "choosing to use it"? Are they saying it's too much because that's a trendy thing to say?
WHAT?! Do you think we as humanity would have gotten to all the modern inventions we have today like the internet, space travel, atomic energy, if we had skipped the fossil fuel era by preemptively regulating it?
How do you imagine that? Unless you invent a time machine, go to the past, and give inventors schematics of modern tech achievable without fossil fuels.
Maybe not as fast as we did, but eventually we would have. Maybe more research would have been put into other forms of energy if the effects of fossil fuels had been considered more thoroughly and usage had been limited to a degree that didn't risk causing such fast climate change. And so what if the rate of progress had been slower and we'd be 50 years behind current tech? At least we wouldn't have to worry about all the damage we've caused, and the costs associated with it. Due to that damage our future progress might halt, while a slower, more careful society would continue advancing far into the future.
I think it's an open question whether we can reboot society without the use of fossil fuels. I'm personally of the opinion that we wouldn't be able to.
Simply taking away some giant precursor for the advancements we enjoy today and then assuming it all would have worked out somehow is a bit naive.
I would need to see a very detailed pipeline from growing wheat in an agrarian society to the development of a microprocessor without fossil fuels to understand the point you're making. The mining, the transport, the manufacture, the packaging, the incredible number of supply chains, and the ability to give people time to spend on jobs like that rather than trying to grow their own food are all major barriers I see to the scenario you're suggesting.
The whole other aspect of this discussion that I think is not being explored is that technology is fundamentally competitive, and so it's very difficult to control the rate at which technology advances because we do not have a global government (and if we did have a global government, we'd have even more problems than we do now). As a comment I read yesterday said, technology concentrates gains towards those who can deploy it. And so there's going to be competition to deploy new technologies. Country-level regulation that tries to prevent this locally is only going to lead to other countries gaining the lead.
You might be right, but I wasn't saying we should ban all use of any technology that has any negative effects; rather that we should at least try to understand all the effects before adopting it, and try to avoid the worst outcomes by regulating how the tech is used. If it turns out that fossil fuels were the only way to achieve modern technology, then we should decide to take the risk of the negative effects knowing that the risk exists. We shouldn't just blindly rush in any direction that might give us some benefit.
Regarding competition, yes you're right. Effective regulation is impossible before we learn global co-operation, and that's probably never going to happen.
Very naive take that's not based in reality but would only work in fiction.
Historically, all nations that developed and deployed new tech, new sources of energy and new weapons, have gained economic and military superiority over nations who did not, which ended up being conquered/enslaved.
The UK would not have managed to be the world power before the US without its coal-fueled industrial era.
So as history goes, if you refuse to take part in, or cannot keep up with, the international tech, energy and weapons race, you'll be subjugated by those who win that race. That's why the US lifted all brakes on AI: to make sure it wins and not China. What the EU is doing, regulating itself to death, is ensuring its future will be at the mercy of the US and China. I'm not the one saying this; history proves it.
You're right, in a system based on competition it's not possible to prevent these technologies from being used as soon as they're invented if there's some advantage to be gained. We need to figure out global co-operation before such a thing is realistic.
But if such co-operation was possible, it would make sense to progress more carefully.
There is no such thing as "global cooperation" in our reality for things beyond platitudes. That's only a fantasy for sci-fi novels. Every tribe wants to rule the others, because if you don't, the other tribes will rule you.
It's been the case since our caveman days. That's why tribes that don't focus on conquest end up removed from the gene pool. Now extend tribe to nation to make it relevant to the current day.
The internet was created by the military at the start of the fossil era; there is no reason why it should depend on the oil era. If we didn't travel as much, because we didn't use cars and planes as much, the internet would be even more important.
Space travel does need a lot of oil, so it might be affected, but its beginnings were in the 40s, so the research idea was already there.
Atomic energy is also from the 40s and might have been the alternative to oil, so it would have thrived more if we hadn't used oil as much.
Also all 3 ARE heavily regulated and mostly done by nation states.
How would you have won the world wars without oil?
Your argument only works in a fictional world where oil does not exist and you have the hindsight of today.
But when oil does exist and you choose not to use it, you will long since have been steamrolled by industrialized powers who used their superior oil-fueled economies and militaries to destroy or enslave your nation, and you wouldn't be writing this today.
I thought we were arguing about regulating oil, not about not using oil at all.
> How would you have won the world wars without oil?
You don't need to win world wars to have technological advancement; in fact my country didn't. I think the problem with this discussion is that we all disagree about what to regulate; that's how we ended up with the current situation after all.
I interpreted it to mean that we wouldn't use plastic for everything. I think we would be fine having glass bottles and using paper, cardboard, and wood for grocery wrapping. It wouldn't be as individualised per company, but that's not important for the economy or for consumers, and it would also result in a more competitive market.
I also interpreted it to mean that we wouldn't have so many cars and wouldn't use planes except for really important purposes (i.e. international politics). Cities simply expand to the travel speed of the primary means of transportation. We would have more walkable cities and would use more trains. Amazon probably wouldn't be possible and we would have more local producers. In fact this is what we currently aim for, and it is hard, because transition means we have larger cities than we can support with the primary means of transportation.
As for your example inventions: we did have computers in the 40s and the need for networking would arise. Space travel is in danger, but you can use oil for space travel without using it for everyday consumer products. As I already wrote, we would have more atomic energy, not sure if that would be good though.
Depends what those assumptions are. If it's protecting humans from AI gross negligence, then the assumptions are predetermined to side with ordinary humans (just one example). Let's hope logic and understanding of the long-term situation precede the arguments in the rulesets.
You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and the destruction of society, which sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given that policy is extremely slow to change (see: copyright).
I'd urge you to read a book like Black Swan, or study up on statistics.
Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.
(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.
I'd like for you to expand your point on understanding statistics better. I think I have a very good understanding of statistics, but I don't see how it relates to your point.
Your point is fundamentally philosophical, which is you can't use the past to predict the future. But that's actually a fairly reductive point in this context.
GP's point is that simply making an argument about why everything will fail is not sufficient to have it be true. So we need to see something significantly more compelling than a bunch of arguments about why it's going to be really bad to really believe it, since we always get arguments about why things are really, really bad.
> which is you can't use the past to predict the future
Of course you can use the past to predict (well, estimate) the future. How fast does wheat grow? Collect a hundred years of statistics of wheat growth and weather patterns, and you can estimate how fast it will grow this year with a high level of accuracy, unless a "black swan" event occurs which wasn't in the past data.
Note carefully what we're doing here: we're applying probability on statistical data of wheat growth from the past to estimate wheat growth in the future.
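The distinction above can be made concrete with a tiny sketch: estimating next year's value by fitting a trend to past observations of the *same* process. The numbers and variable names here are purely made up for illustration; the point is that the extrapolation is only justified because the future draw is assumed to come from the same process as the historical record, and a "black swan" (an unprecedented drought, say) breaks that assumption.

```python
import numpy as np

# Hypothetical historical observations: mean seasonal wheat growth (cm/week).
years = np.arange(2010, 2020)
growth = np.array([5.1, 5.0, 5.3, 5.2, 5.4, 5.3, 5.5, 5.6, 5.5, 5.7])

# Fit a simple linear trend to the past data.
slope, intercept = np.polyfit(years, growth, 1)

# Extrapolate one year ahead. Valid only if 2020 is drawn from the
# same process that generated 2010-2019; past data about a *different*
# process would tell us nothing here.
estimate = slope * 2020 + intercept
print(round(float(estimate), 2))
```

Nothing in this fit licenses conclusions about an unrelated quantity, which is exactly the fallacy described above: historical data on one process cannot ground predictions about a different one.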
There's no past data about the effects of AI on society, so there's no way to make statements about whether it will be safe in the future. However, people use the fact that other, completely unrelated, things in the past didn't cause "doom" (societal collapse) to predict that AI won't cause doom. But statistics and probability don't work this way; using historical data about one thing to predict the future of another thing is a fallacy. Even if in our minds they are related (doom/societal collapse caused by a new technology), mathematically, they are not related.
> we always get arguments about why things are really, really bad.
When we're dealing with a completely new, powerful thing that we have no past data on, we absolutely should consider the worst, and of course, the median, and best case scenarios, and we should prepare for all of these. It's nonsensical to shout down the people preparing for the worst and working to make sure it doesn't happen, or to label them as doomers, just because society has survived other unrelated bad things in the past.
Ah, I see your point is not philosophical. It's that we don't have historical data about the effect of AI. I understand your point now. I tend to be quite a bit more liberal and allow things to play out because I think many systems are too complex to predict. But I don't think that's a point that we'll settle here.
The experience with other industries like cars (especially EVs) shows that the ability of EU regulators to shape global and home markets is a lot more limited than they like to think.
Not really. China made a big policy bet a decade early and won that battle: the whole government pushed to adopt the new tech before everyone else, for example by requiring buses to be electric if you wanted the federal-level thumbs up, or via the license-plate lottery system.
So I disagree; Europe would probably be even further behind in EVs if it hadn't pushed EU manufacturers to invest so heavily in the industry.
You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean; and in Europe Volkswagen already overtook Tesla in Q1 sales, with Audi not far behind either.
Regulating it while the cat is out of the bag leads to monopolistic conglomerates like Meta and Google.
Meta shouldn't have been allowed to usurp Instagram and WhatsApp, and Google shouldn't have been allowed to bring YouTube into the fold. Now it's too late to regulate a way out of this.
It’s easy to say this in hindsight, though this is the first time I think I’ve seen someone say that about YouTube even though I’ve seen it about Instagram and WhatsApp a lot.
The YouTube deal was a lot earlier than Instagram, 2006. Google was way smaller than now. iPhone wasn’t announced. And it wasn’t two social networks merging.
Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.
Technically untrue, monopoly busting is a kind of regulation. I wouldn't bet on it happening on any meaningful scale, given how strongly IT benefits from economies of scale, but we could be surprised.
> before we have any idea what the market is going to look like in a couple years.
Oh, we already know large chunks of it, and the regulations explicitly address that.
If the chest-beating crowd would be presented with these regulations piecemeal, without ever mentioning EU, they'd probably be in overwhelming support of each part.
But since they don't care to read anything and have an instinctive aversion to all things regulatory and most things EU, we get the boos and the jeers
I literally lived this with GDPR. In the beginning every one ran around pretending to understand what it meant. There were a ton of consultants and lawyers that basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.
In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.
Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in mandarin.
> In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years.
Is the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it ok?
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed?
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing it can predict a number of tokens that follow. It’s a big stretch to say that they’re reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?
They're probably training them to refuse, but fundamentally the models are obviously too small to usually memorise content, and can only do it when there's many copies in the training set. Quotation is a waste of parameters better used for generalisation.
The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.
The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
You can also ask people to repeat a text and some will fail.
What I want to say is that even if some LLMs (probably only older ones) fail at this, it doesn't mean future ones will fail in the majority of cases. Especially if benchmarks indicate they are becoming smarter over time.
It is entirely unreasonable to prevent a general-purpose model from being distributed for the largely frivolous reason that some copyrighted works could maybe be approximated using it. We don't make metallurgy illegal because it's possible to make guns with metal.
When a model that has this capability is being distributed, copyright infringement is not happening. It is happening when a person _uses_ the model to reproduce a copyrighted work without the appropriate license. This is not meaningfully different to the distinction between my ISP selling me internet access and me using said internet access to download copyrighted material. If the copyright holders want to pursue people who are actually doing copyright infringement, they should have to sue the people who are actually doing copyright infringement and they shouldn't have broad power to shut down anything and everything that could be construed as maybe being capable of helping copyright infringement.
Copyright protections aren't valuable enough to society to destroy everything else in society just to make enforcing copyright easier. In fact, considering how it is actually enforced today, it's not hard to argue that the impact of copyright on modern society is a net negative.
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
Xerox already went through that lawsuit and won, which is why photocopiers still exist. The tool isn't in the wrong for being told to print out the copyrighted works. The user still had to make the conscious decision to copy that particular work. Hence, still the user's fault.
You take the copyrighted work to the printer; with an LLM you don't upload the data first, it is already in the machine. If you got LLMs without training data (however that would work) and the user had to provide the data, then it would be OK.
You don't "upload" data to an LLM, but that's already been explained multiple times, and evidently it didn't soak in.
LLMs extract semantic information from their training data and store it at extremely low precision in latent space. To the extent original works can be recovered from them, those works were nothing intrinsically special to begin with. At best such works simply milk our existing culture by recapitulating ancient archetypes, a la Harry Potter or Star Wars.
If the copyright cartels choose to fight AI, the copyright cartels will and must lose. This isn't Napster Part 2: Electric Boogaloo. There is too much at stake this time.
One of the reasons the New York Times didn't supply the prompts in their lawsuit is because it takes an enormous amount of effort to get LLMs to produce copyrighted works. In particular, you have to actually hand LLMs copyrighted works in the prompt to get them to continue it.
It's not like users are accidentally producing copies of Harry Potter.
Helpfully, the law already disagrees. That Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. The Xerox machine (like every other printer sold today) literally leaves a paper trail tracing copies back to it.
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
If I've copied someone else's copyrighted work on my Xerox machine, then give it to you, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I give it to you, that's another story. The issue here isn't the ability of an LLM to produce it when I provide it with the copyrighted work as an input, it's whether or not there's an input baked-in at the time of distribution that gives it the ability to continue producing it even if the person who receives it doesn't have access to the work to provide it in the first place.
To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.
You can train an LLM on completely clean data, creative commons and legally licensed text, and at inference time someone will just put a whole article or chapter into the context and have full access to regenerate it however they like.
Re-quoting the section the parent comment included from this agreement:
> > GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
It sounds to me like an LLM like the one you describe would be covered if the people distributing it put a clause in the license saying that users can't do that.
According to the law in some jurisdictions it is (notably most EU member states, and several others worldwide).
In those places fees ("reprographic levies") are actually included in the price of the appliance and its supplies, and public operators may need to pay additional fees based on usage. That money goes towards funds created to compensate copyright holders for lost profits due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)
It's a trojan horse; they are trying to do the same thing that is happening in the banking sector.
By this they want AI model providers to keep a strong grip on their users, controlling their usage so as not to risk issues with the regulator.
Then the European technocrats will be able to control the whole field by controlling the top providers, who in turn will overreach by controlling their users.
> One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way
AFAICT the actual text of the act[0] does not mention anything like that. The closest to what you describe is part of the chapter on copyright of the Code of Practice[1], however the code does not add any new requirements to the act (it is not even part of the act itself). What it does is to present a way (which does not mean it is the only one) to comply with the act's requirements (as a relevant example, the act requires to respect machine-readable opt-out mechanisms when training but doesn't specify which ones, but the code of practice explicitly mentions respecting robots.txt during web scraping).
The part about copyright outputs in the code is actually (measure 1.4):
> (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:
> a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and
> b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.
> (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.
Keep in mind that "Signatories" here is whoever signed the Code of Practice: obviously if I make my own AI model and do not sign that code of practice myself (but I still follow the act's requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.
As for open source models, the "(1b)" above is quite clear (for open source models that want to use this code of practice - which they do not have to!) that all they have to do is to mention in their documentation that their users should not generate copyright infringing content with them.
In fact the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI office (or whatever it is called, I do not remember) does not recognize Meta's AI as open source, so they do not get to benefit from those exceptions, though I'm not sure about the details here.
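As a concrete illustration of the opt-out point: the code of practice's suggestion to respect robots.txt during web scraping can be implemented with nothing more than the Python standard library. This is a minimal sketch; the bot name and URLs are hypothetical.

```python
# Minimal sketch of honouring a robots.txt opt-out while crawling for
# training data. Uses only the Python standard library; "ExampleAIBot"
# and the URLs are hypothetical.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.parse([
    "User-agent: ExampleAIBot",   # a hypothetical training crawler
    "Disallow: /",                # site-wide opt-out for that bot
    "User-agent: *",
    "Allow: /",
])

# A compliant crawler checks the opt-out before fetching each page.
print(robots.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(robots.can_fetch("OtherBot", "https://example.com/article"))      # True
```

In practice the parser would be fed the site's real robots.txt (via `set_url` and `read`) rather than an inline rule list, but the check itself is this simple.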
These regulations may end up creating a trap for European companies.
Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.
Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.
On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.
Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.
Well, if there's not much difference, why bother? If there are copyright restrictions on things people care about, Europeans are perfectly capable of bypassing them, like watching the ending of Game of Thrones etc.
Haha, huge, HUGE L-take. Go to any library or coffeeshop, and you'll see most students on their laptops are on ChatGPT. Do you think they won't immediately figure out how to use a VPN to move to the "better" models from the US or China if the EU regulations cripple the ones available in the EU?
The EU's preemptive war on AI will be like the RIAA's war on music piracy. EU consumers will get their digital stuff one way or another; only the EU's domestic products will fall behind by not competing to create an equally good product that consumers want.
>I think they don't even know the term "model" (in AI context), let alone which one's the best. They only know ChatGPT.
They don't know how torrents work either, but they always find a way to pirate movies to avoid Netflix's shitty policies. Necessity is the mother of invention.
>However I don't think many will make use of that.
You underestimate the drive kids/young adults have trying to maximize their grades/output while doing the bare minimum to have more time for themselves.
>Additionally, AI companies could quickly get in trouble if they accept payments from EU credit cards.
Well, if the EU keeps this up, that might not be an issue in the long term: without top-of-the-line AI, choked by regulation, and with the costs of caring for an ageing population sucking up all the economic output, the EU economy will fall further and further into irrelevancy.
Europe is digging itself a hole with a combination of suffocating regulation and dependence on foreign players. It's so dumb, but Europeans are so used to it they can't see the problem.
It's always the same argument, and it is true. The US retained an edge over the rest of the world through deregulating tech.
My issue with this is that it doesn't look like America's laissez-faire stance on these issues helped Americans much. Internet companies have gotten absolutely humongous and gave rise to a new class of techno-oligarchs that are now funding anti-democracy campaigns.
I feel like getting slightly less performant models is a fair price to pay for increased scrutiny over these powerful private actors.
The problem is that misaligned AI will eventually affect everyone worldwide. Even if us Americans cause the problem, it won't stay an American problem.
If Europe wants leverage, the best plan is to tell ASML to turn off the supply of chipmaking machines.
It's basically micromanaging an industry that European countries have not been able to cultivate themselves. It's legislation for legislation's sake. If you had a naive hope that Mario Draghi's gloomy report on the EU's competitiveness would pave the way for a political breakthrough in the EU - one is tempted to say something along the lines of communist China's market reforms in the 70s - then you have to conclude that the EU is continuing in exactly the same direction. I have actually lost faith in the EU.
The problem is this severely harms the ability to release open-weights models, and only leaves the average person with options that aren't good for privacy.
Nope. This text is embedded in HN and will survive rather better than the prompt or the search result, both of which are non-reproducible. It may bear no relation to reality but at least it won't abruptly disappear.
But AI also carries tremendous risks, from something as simple as automated warfare to something like an evil AGI.
In Germany we still have traumas from the automatic machine guns set up along the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.
The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!
I think your machine gun example illustrates that people are quite capable of massacring each other without AI or even high tech; in past periods sometimes over 30% of males died in warfare. While AI could get involved, it's kind of a separate thing.
Yeah, his automated-gun-phobia argument is dumb. Should we ban all future tech development because some people are scared of things that can be dangerous but useful? No.
Plus, ironically, Germany's Rheinmetall is a leader in automated anti-air guns, so the phobia of automated guns is pointless; at least in this case common sense won, but in many others, like nuclear energy, it lost.
It seems like Germans are easy to manipulate into going against their best interests, if you manage to trigger some phobia in them via propaganda: "Ohoho, look out, it's the nuclear boogieman, now switch your economy to Russian gas instead, it's safer."
Switching to Russian gas looks bad now, but it was rational back then. The idea was to give Russia leverage on Europe besides war, so that they wouldn't need war.
Only if you're a corrupt German politician getting bribed by Russia to sell out long term national security for short term corporate profits.
It was also considered a stupid idea back then by NATO powers asking Germany WTF are you doing, tying your economy to the nation we're preparing to go to war with.
> The idea was to give Russia leverage on Europe besides war, so that they wouldn't need war.
The present day proves it was a stupid idea.
"You were given the choice between war and dishonor. You chose dishonor, and you will have war." - Churchill
It worked quite well between France and Germany 50 years earlier.
Yes, it was naive given the philosophy of the leaders of the USSR/Russia, but I don't think it was that problematic. We did need some years to adapt, but it doesn't meaningfully impact the ability to send weapons to Ukraine and impose sanctions (in the long term). Meanwhile we got cheap gas for some decades, and Russia got some other trade partners besides China. Would we be better off if we hadn't used the gas in the first place? Then Russia would have bound itself earlier to just China, North Korea, etc. It also had less environmental impact than shipping gas from the US.
>It worked quite well between France and Germany 50 years earlier.
France and Germany were democracies under the umbrella of the US, which acted as arbiter. It's disingenuous, even stupid, to argue that an economic relationship with the USSR and Putin's Russia is the same thing.
Yes, I agree it was naive. It is the kind of idea people come up with if they think everyone cares about their own population's best interests and "western" values. Yet that is an assumption we used to base a lot on, and still do.
Did the US force France into it? I thought it was an idea of the French government (Charles de Gaulle), while the population had much resentment, which only vanished after successful business together. Germany didn't have much choice, though. I don't think it would have had a lasting impact if it had been decreed rather than coming from the local population.
You could hope that making Russia richer would make them prefer being rich over being large, which is basically the deal we have with China, itself still an alien dictatorship.
It was a major success, contributing to the thawing of relationships with the Soviet Union and probably contributed to the peaceful end of the Soviet Union.
It supported several EU countries through their economic development and kept the EU afloat through the financial crisis.
It was a very important source of energy and there is no replacement. This can be seen by the flight of capital, deindustrialisation and poor economic prospects in Germany and the EU.
But as far as I know, many countries still import energy from Russia, either directly or laundered through third parties.
I think the argument was about automated killing, not automated weapons.
There are already drones from Germany capable of automatic target acquisition, but they still require a human in the loop to pull the trigger. Not because they technically couldn't, but because they are required to.
I don't disagree that we need regulation, but I also think citing literal fiction isn't a good argument. We're also very, very far away from anything approaching AGI, so the idea of it becoming evil seems a bit far fetched.
On the other hand, firstly every single person disagrees what the phrase AGI means, varying from "we've had it for years already" to "the ability to do provably impossible things like solve the halting problem"; and secondly we have a very bad track record for knowing how long it will take to invent anything in the field of AI with both positive and negative failures, for example constantly thinking that self driving cars are just around the corner vs. people saying an AI that could play Go well was "decades" away a mere few months before it beat the world champion.
Autonomous sentry turrets have already been a thing since the 2000s. If we assume that military technology is always at least some 5-10 years ahead of civilian, it is likely that some if not all of the "defense" contractors have far more terrifying autonomous weapons.
Regulation does not stop weapons that utilize AI from being created. It only slows down honest states that try to abide by it, and gives the dishonest ones a head start.
The only thing the cookie law has accomplished for users, is pestering everyone with endless popups (full of dark patterns). WWW is pretty much unbearable to use without uBlock filtering that nonsense away. User tracking and fingerprinting has moved server side. Zero user privacy has been gained, because there's too much money to be made and the industry routed around this brain dead legislation.
> User tracking and fingerprinting has moved server side.
This smells like a misconception of the GDPR. The GDPR is not about cookies, it is about tracking. You are not allowed to track your users without consent, even if you do not use any cookies.
Cookies and cross-domain tracking are slightly different from a login. A login occurs on one platform and doesn't track you when you go on to Amazon or some porn site or read Infowars. But cross-domain cookies don't need auth, and they are everywhere because webmasters get paid for adding them; they track you everywhere.
Well in my case I just do not use those websites with an enormous amount of “partners” anymore. Cookie legislation was great because it now shows you how many businesses are ad based, it added a lot of transparency. It is annoying only because you want the shit for free and it carries a lot of cookies usually. All of the businesses that do not track beyond the necessary do not have that issue with the cookie banners IMO. GDPR is great for users and not too difficult to implement. All of the stuff related to it where you can ask the company what data they hold about you is also awesome.
Well, Europe hasn't enacted policies that actually break American monopolies until now.
Europeans are still essentially on Google, Meta and Amazon for most of their browsing experiences. So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
A position which is essentially reasonable if not too polite.
> So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
When push comes to shove the US company will always prioritize US interest. If you want to stay under the US umbrella by all means. But honestly it looks very short sighted to me.
You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.
That it is very likely not going to work as advertised, and might even backfire.
The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
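For a sense of scale, that 10^25 FLOPS threshold can be eyeballed with the common ~6·N·D rule of thumb (N parameters, D training tokens) for dense transformers. This is a back-of-the-envelope sketch; the model sizes below are illustrative assumptions, not figures for any specific model.

```python
# Rough training-compute estimate via the ~6 * N * D FLOPs heuristic.
# N = parameter count, D = number of training tokens.
# The example configurations are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

EU_THRESHOLD = 1e25  # threshold above which the act's heavier rules apply

for name, n, d in [
    ("7B params, 2T tokens",   7e9,  2e12),
    ("70B params, 15T tokens", 70e9, 15e12),
]:
    flops = training_flops(n, d)
    side = "over" if flops > EU_THRESHOLD else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} 1e25)")
```

By this heuristic, even a fairly large open-weights training run can land under the threshold; it is the true frontier-scale runs that cross it.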
The sad reality is that nobody ever cares about the security/ethics of their product unless they are pressured. Model evaluation against some well-defined ethics framework, or something like HarmBench, is not without cost; nobody wants to do it. It is similar to pentesting. It is good that such suggestions are being pushed forward to make sure model owners are held responsible. It also protects authors and reduces the risk of their works being copied verbatim; I think this is what model owners are most afraid of.
This is the same entity that has literally ruled that you can be charged with blasphemy for insulting religious figures, so intent to protect citizens is not a motive I ascribe to them.
>Even in a lively discussion it was not compatible with Article 10 of the Convention to pack incriminating statements into the wrapping of an otherwise acceptable expression of opinion and claim that this rendered passable those statements exceeding the permissible limits of freedom of expression.
Although the expression of this opinion is otherwise acceptable, it was packed with "incriminating statements". But the subject of these incriminating statements is a 2000-year-old mythical figure.
The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". But it's for our own good you see? The EU knows best and it wants your data without limits and without probable cause. Everyone is a suspect.
A good example of how this can end up with negative outcomes is the cookie directive, which is how we ended up with cookie consent popovers on every website that does absolutely nothing to prevent tracking and has only amounted to making lives more frustrating in the EU and abroad.
It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies, which still track and profile largely unhindered.
Cookie consent popovers were a deliberate decision by companies to create the worst possible compliance experience. A much simpler one would have been to stop tracking users, especially when it is not their primary business.
Newer regulations also mandate that "reject all cookies" should be a one click action but surprisingly compliance is low. Once again, the enemy of the customer here is the company, not the eu regulation.
I don’t believe that every website has colluded to give themselves a horrible user experience in some kind of mass protest against the GDPR. My guess is that companies are acting in their own interests, which is exactly what I expect them to do; and if the EU is not capable of figuring out what that will look like, then that is a valid criticism of its ability to make regulations.
What makes you think the regulators didn't predict the outcome?
Of course the business which depend on harvesting data will do anything they can to continue harvesting data. The regulation just makes that require consent. This is good.
If businesses are intent to keep on harvesting data by using dark patterns to obtain "consent", these businesses should either die or change. This is good.
Websites use ready-to-use cookie banners provided by their advertisers, who have every incentive to make the process as painful as possible unless you click "accept", essentially following the model that Facebook pioneered.
And since most people click on accept, websites don't really care either.
Well, pragmatically, I'd say no. We must judge regulations not by the well wishes and intentions behind them but the actual outcomes they have. These regulations affect people, jobs and lives.
The odds of the EU actually hitting a useful mark with these types of regulations, given their technical illiteracy, are just astronomically low.
I think OP is criticising blindly trusting the regulation hits the mark because Meta is mad about it. Zuckerberg can be a bastard and correctly call out a burdensome law.
Bad argument, the solution is not to not regulate, it's to make a new law mandating companies to make cookies opt-in behind a menu that can't be a banner. And if this somehow backfires too, we go again. Giving up is not the solution to the privacy crisis.
"Even the very wise cannot see all ends." And these people aren't what I'd call "very wise."
Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.
Everyone working on AI will care, if ASML stops servicing TSMC's machines. If Europe is serious about responsible AI, I think applying pressure to ASML might be their only real option.
> If Europe is serious about responsible AI, I think applying pressure to ASML might be their only real option.
True, but now they get to butt heads with the US, who call the tunes at ASML even though ASML is a European company.
We (the US) have given China every possible incentive to break that dependency short of dropping bombs on them, and it would be foolish to think the TSMC/ASML status quo will still hold in 5-10 years. Say what you will about China, they aren't a nation of morons. Now that it's clear what's at stake, I think they will respond rationally and effectively.
I was specifically referring to several comments stating that the authors did not know what the regulation was, but assumed Europe was right and Meta was wrong.
Prior to reading the details of the regulation myself, I was commenting on my surprise at people's default inclinations.
At no point did I pass judgement on the regulation and even after reading a little bit on it I need to read more to actually decide whether I think it's good or bad.
Being American it impacts me less, so it's lower on my to do list.
Let's see, how many times did I get robo-called in the last decade? Zero :)
Sometimes the regulations are heavy-handed and ill-conceived. Most of the time, they are influenced by one lobby or another. For example, car emissions limits scale with _weight_ of all things, which completely defeats the point and actually makes today's car market worse for the environment than it used to be, _because of_ emissions regulations. However, it is undeniable that that the average European is better off in terms of privacy.
How about not assuming by default? How about reading something about this? How about forming your own opinion, and not the opinion of the trillion-dollar supranational corporations?
That you don't have an immediate guess at which party you are most likely to agree with?
I do and I think most people do.
I'm not about to go around spreading my uninformed opinion though. What my comment said was that I was surprised at people's kneejerk reaction that Europe must be right, especially on HN. Perhaps I should have also chided those people for commenting at all, but that's hindsight for you.
The "kneejerk reaction" is precisely because it's Meta. You know, the poor innocent trillion-dollar supranational corporation whose latest foray into AI was "opt-in all of our users into training our AI and make the opt-out an awkward broken multi-step process designed to discourage the opt-out".
Whereas EU's "heavy-handed and ill-conceived" regulations are "respect copyright, respect user choice, document your models, and use AI responsibly".
Maybe the others have put in a little more effort to understand the regulation before blindly criticising it? Similar to the GDPR, a lot of it is just common sense—if you don’t think that "the market" as represented by global mega-corps will just sort it out, that is.
Our friends in the EU have a long history of well-intentioned but misguided policy and regulations, which has led to stunted growth in their tech sector.
Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely any regulation regarding AI at this point in time is premature, doomed for failure and unintended consequences.
Yet at the same time, they also have a long history of very successful policy, such as the USB-C issue, but also the GDPR, which has raised the issue of our right to privacy all over the world.
How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices from customers. Next up is insurance companies. How long do you want to watch? Until all accountability is gone for good?
As someone with both a usb-c and micro-usb phone, I can assure you that other connectors are not free of that problem. The micro-usb one definitely feels worse. Not sure about the old proprietary crap that used to be forced down our throats so we buy Apple AND Nokia chargers, and a new one for each model, too.
Who said money. Time and human effort are the most valuable commodities.
That time and effort wasted on consultants and lawyers could have been spent on more important problems or used to more efficiently solve the current one.
Which... has the consequence of stifling innovation. Regulation/policy is a two-way street.
Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few years' time?
The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market. Fast forward a decade when USB-C is dead, EU will keep it limping along - stifling more innovation along the way.
Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology/internet. Normally standardization is achieved via two (or more) competing standards where one eventually "wins" via adoption.
If the industry comes out with a new, better connector, they can use it, as long as they also provide USB-C ports. If enough of them collectively decide the new one is superior, then they can start using that port in favor of USB-C altogether.
The EU says nothing about USB-C being the bestest and greatest, they only say that companies have to come to a consensus and have to have 1 port that is shared between all devices for the sake of consumers.
I personally much prefer USB-C over the horrid clusterfuck of proprietary cables that weren't compatible with one another, that's for sure.
> The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market.
As in: the EU regulation literally addresses this. You'd know it if you didn't blindly repeat uneducated talking points by others who are as clueless as you are.
> Standardization like this is difficult to achieve via consensus - but via policy/regulation?
In the ancient times of 15 or so years ago every manufacturer had their own connector incompatible with each other. There would often be connectors incompatible with each other within a single manufacturer's product range.
The EU said: settle on a single connector voluntarily, or else. At the time the industry settled on micro-USB and started working on USB-C. Hell, even Power Delivery wasn't standardized until USB-C.
Consensus doesn't always work. Often you do need government intervention.
You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?
No, the law that ensures that private individuals have the power to know what is stored about them, change incorrect data, and have it deleted unless it is legally necessary to hold it - all in a timely manner - and that financially penalizes companies that do not comply.
No, GDPR is the law that allowed me to successfully request the deletion of everything companies like Meta have ever harvested on me without my consent and for them to permanently delete it.
Fun fact, GitHub doesn't have cookie banners. It's almost like it's possible to run a huge site without being a parasite and harvesting every iota of data of your site's visitors!
I’d side with Europe blindly over any corporation.
The European government has at least a passing interest in the well-being of human beings, while that is not valued by the incentives that corporations live by.
The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". No limits and no probable cause required. Everyone is a suspect.
Are you still sure you want to side blindly with the EU?
That's the issue with people from a certain side of politics: they don't vote for something, they always side with or vote against something or someone... blindly. It's like pure hate winning over reason. But it's OK, they are the 'good' ones, so they are always right and don't really need to think.
Sometimes people are just too lazy to read an article. If you just gave one argument in favor of Meta, then perhaps that could have started a useful conversation.
No... making teenagers feel depressed sometimes is not in fact worse than facilitating the Holocaust, using human limbs as currency, enslaving half the world and dousing the earth with poisons combined.
I'm not saying Meta isn't evil - they're a corporation, and all corporations are evil - but you must live in an incredibly narrow-minded and privileged bubble to believe that Meta is categorically more evil than all other evils in the span of human history combined.
Go take a tour of Dachau and look at the ovens and realize what you're claiming. That that pales in comparison to targeted ads.
Dachau was enabled by the Metas of that time. It needed advertising, a.k.a. propaganda, to get to that political regime, and it needed surveillance to keep people in check and target the people who got a sponsorship for that lifelong vacation.
I'm specifically talking about comments that say they haven't read it, but that they side with Europe. Look through the thread, there's a ton like that
To be fair, anyone who genuinely likes React is probably insane?
Plenty of great projects are developed by people working at Meta. Doesn't change the fact that the company as a whole should be split in at least 6 parts, and at least two thirds of these parts should be regulated to death. And when it comes to activities that do not improve anyone's life such as advertisement and data collection, I do mean literally regulated into bankruptcy.
We don't like what trillion-dollar supranational corporations and infinite VC money are doing with tech.
Hating things like "We're saving your precise movements and location for 10+ years" and "we're using AI to predict how much you can be charged for stuff" is not hating technology
Edit: from the LinkedIn post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
Sure, but Meta saying "We share concerns raised by these businesses" translates to: It is in our and only our benefit for PR reasons to agree with someone, we don't care who they are, we don't give a fuck, but just this second it sounds great to use them for our lobbying.
Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.
> has never done and will never do anything in the general public's interest
I'm no Meta apologist, but haven't they been at the forefront of open-source AI development? That seems to be in the "general public's interest".
Obviously they also have a business to run, so their public benefit can only go so far before they start running afoul of their fiduciary responsibilities.
Of course. Skimming over the AI Code of Practice, there is nothing particularly unexpected or qualifying as “overreach”. Of course, to be compliant, model providers can’t be shady which perhaps conflicts with Meta’s general way of work.
Companies did that, as did thoughtless website owners, small and large, who decided it was better to collect arbitrary data even if they had no capacity to convert it into information.
The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
It was and is a blatant misuse. The website owners all have a choice: shift the responsibility from themselves to the users and bugger them with endless pop ups, collect the data and don’t give a shit about user experience. Or, just don’t use cookies for a change.
And look which decision they all made.
A few notable examples do exist: https://fabiensanglard.net/
No popups, no banner, nothing. He just doesn't collect anything; thus, no need for a cookie banner.
The mistake the EU made was to not foresee the madness used to make these decisions.
I’ll give you that it was an ugly, ugly outcome. :(
> The mistake the EU made was to not foresee the madness used to make these decisions.
It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.
Interesting framing. If you continue this line of thought, it will end up in a philosophical argument about what kind of image of humanity one has. So your solution would be to always expect everybody to be the worst version of themselves? In that case, that will make for some quite restrictive laws, I guess.
People are generally responsive to incentives. In this case, the GDPR required:
1. Consent to be freely given, specific, informed and unambiguous and as easy to withdraw as to give
2. High penalties for failure to comply (€20 million or 4 % of worldwide annual turnover, whichever is higher)
Compliance is tricky and mistakes are costly. A pop-up banner is the easiest off-the-shelf solution, and most site operators care about focusing on their actual business rather than compliance, so it's not surprising that they took this easy path.
If your model of the world or "image of humanity" can't predict an outcome like this, then maybe it's wrong.
> and most site operators care about focusing on their actual business rather than compliance,
And that is exactly the point. Thank you. What is encoded as compliance in your example is actually the user experience. They off-loaded responsibility completely to the users. Compliance is identical to UX at this point, and they all know it. To modify your sentence: “and most site operators care about focusing on their actual business rather than user experience.”
The other thing is a lack of differentiation. The high penalties you are talking about apply only to the top-traffic websites. I agree, it would be insane to gamble on removing the banners in that league. But tell me: why does every single-page website of a restaurant, fishing club, or retro-gamer blog have a cookie banner? For what reason? They won't make the turnover you dream about in your example even if they won the lottery, twice.
Well, you and I could have easily anticipated this outcome. So could regulators. For that reason alone…it’s stupid policy on their part imo.
Writing policy is not supposed to be an exercise where you “will” a utopia into existence. Policy should consider current reality. If your policy just ends up inconveniencing 99% of users, what are we even doing lol?
I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.
I’m sure there are other creative solutions as well. Or just implementing larger fines.
If the law incentivized practically every website to implement the law in the "wrong" way, then the law seems wrong and its implications weren't fully thought out.
Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).
Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
> Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
Well, it's an information-only website; it has no ads or even a login, so they don't need to use any cookies at all. In fact, if you look at the page response in the browser dev tools, there are no cookies on the website, so to be honest they should just delete the cookie banner.
This is why the EU law was nonsense. Many of those cookies listed are there just because they want to embed things like YouTube or Vimeo videos. Embedding YouTube to show videos to your users is massively cheaper and easier than self-hosted video infrastructure. The idea that the GDPR's own website just implemented GDPR "wrong" because they should just avoid using cookies is nonsense and impractical.
The other part of the point I was trying to make is that if there's a different technological solution to cookie banners, the europa.eu sites are not demonstrating it. Instead, companies that don't do it that way get fined for some inadequacy in their approach.
Thus, I hold that the GDPR requires cookie banners.
---
Another part to consider is that if videos (and LinkedIn for job searches, and Google Maps for maps, and Internet Archive for whatever they embed from there) are sufficiently onerous third-party cookies ("yes, we're being good with our cookies, but we use third-party providers and can't do anything about them, but we informed you and you accepted their cookies")... then wouldn't it be an opportunity for the Federal Ministry of Transport and Digital Infrastructure https://en.wikipedia.org/wiki/Federal_Ministry_for_Transport or similar to have grants https://www.foerderdatenbank.de for companies to create a viable GDPR-friendly alternative to those services?
That is, if the GDPR and other EU regulations weren't stifling innovation and establishing regulatory capture (it's expensive to hire and retain the lawyers needed to skirt the rules), making it impossible for such newer alternative companies to thrive and prosper within the EU.
But this is a failure on the part of the EU law makers. They did not understand how their laws would look in practice.
Obviously some websites need to collect certain data, and the EU provided a pathway for them to do that: user consent. It was essentially obvious that every site which wanted to collect data for some reason could simply ask for consent. If this wasn't intended by the EU, it was obviously foreseeable.
>The mistake the EU made was to not foresee the madness used to make these decisions.
Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.
What makes all these EU laws so insufferable isn't that they make certain things illegal, it is that they force everyone to adopt specific compliance processes, which often do exactly nothing to achieve the intended goal.
User consent was the compliance path to be able to gather more user data. Not foreseeing that sites would just ask that consent was a failure of stupid bureaucrats.
Of course they did not intend that sites would just show pop ups, but the law they created made this the most straightforward path for compliance.
Tracking users isn’t actually needed, though. Websites that store data only for the functionality they offer don’t need to seek consent.
The actual problem is weak enforcement. If the maximum fines allowed by the law had been levied, several companies would’ve been effectively ended or excluded from the EU. That would’ve been good incentive for non-malicious compliance.
That cannot possibly be the right way to frame this.
I agree with some parts of it but also see two significant issues:
1. It is even statistically implausible that everyone working at the EU is tech-illiterate and stupid and everybody at HN is a body of enlightenment on two legs. This is a tech-heavy forum, but I would guess most here are bloody amateurs regarding theory and science of law and you need at least two disciplines at work here, probably more.
This is drifting too quickly into a territory of critique by platitudes for the sake of criticism.
2. The EU made an error of commission, not omission, and I think that is a good thing. They need to make errors in order to learn from them and get better. Critique by using platitudes is not going to help the case. It is actually working against it. The next person initiating an EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet. So, how should that work out? Exactly like this: we will be stuck for half an eternity and no one will correct anything, because if you don’t do anything you can’t do any wrong! We as a society mostly record the things that someone did wrong but almost never record something somebody should have done but didn’t. That’s an error of omission, and it is usually magnitudes more significant than an error of commission. What is needed is an alternative way of handling and judging errors. Otherwise, the path of learning by error will be blocked by populism.
——-
In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time and we will probably have to suffer another ten years or so until the error gets removed. The EU as a system needs to be accelerated by a margin so that it gets to an iterative approach if an error was made. I would argue with a cybernetic feedback loop approach here, but as we are on HN, this would translate to: move fast and break things.
On point 1. Tech illiteracy is something that affects an organization, it is independent of whether some individuals in that organization understand the issues involved. I am not arguing that nobody at the EU understands technology, but that key people pushing forward certain pieces of legislation have a severe lack of technical background.
On point 2. My argument is that the EU is fundamentally legislating wrong. The laws they create are extremely complex and very hard to decipher, even by large corporate law teams. The EU does not create laws which clearly outlaw certain behaviors, they create corridors of compliance, which legislate how corporations have to set up processes to allow for certain ends. This makes adhering to these laws extremely difficult, as you can not figure out if something you are trying to do is illegal. Instead you have to work backwards, start by what you want to do, then follow the law backwards and decipher the way bureaucrats want you to accomplish that thing.
I do not particularly care about cookie banners. They are just an annoying thing. But they clearly demonstrate how the EU is thinking about legislation, not as strict rules, but as creating corridors. In the case of cookie banners the EU bureaucrats themselves did not understand that the corridor they created allowed basically anyone to still collect user data, if they got the user to click "accept".
The EU creates corridors of compliance. These corridors often map very poorly onto the actual processes and often do little to solve the actual issues. The EU needs to stop seeing themselves as innovators, who create broad highly detailed regulations. They need to radically reform themselves and need to provide, clear and concise laws which guarantee basic adherence to the desired standards. Only then will their laws find social acceptance and will not be viewed as bureaucratic overreach.
> Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.
I am sorry but I too agree with OP's statement. The EU is full of technocrats who have no idea about tech and they get easily swayed by lobbies selling them on a dream that is completely untethered to the reality we live in.
> The next person initiating an EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet.
You are talking as if someone is actually looking at the problem. Is that so? Because if there were such a feedback loop as you seem to think exists to correct this issue, then where is it?
> In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time and we will probably have to suffer another ten years or so until the error gets removed.
So we should not hold people accountable when they make mistakes and waste everyone's time then?
There is plenty of evidence to show that the EU as a whole is incompetent when it comes to tech.
Case in point: the chat-control law that is being pushed despite every single expert warning of the dire consequences for privacy and the dangerous precedent it sets. Yet they keep pushing it because it is seen as a political win.
If the EU knew something about tech, they would know that placing back-doors in all communication applications is a non-starter.
> You are talking as if someone is actually looking at the problem. Is that so? Because if there were such a feedback loop as you seem to think exists to correct this issue, then where is it?
Yes, the problem is known and actually being worked on. There are several approaches, some initiated at the country level (probably because the EU is too slow), some within the institution, such as this one:
No, I don’t think that institutionalised feedback loops exist there, but I do not know. I can only infer from observation that they are probably not in place, as their presence would, I think, show up as “move fast and break things”.
> So we should not hold people accountable when they make mistakes and waste everyone's time then?
I have not made any direct remark about accountability, but I’ll play along: what happens by handling mistakes that way is accountability through fear. What is needed, in my opinion, is calculated risk-taking and responsibility on a basis of trust, not punishment. Otherwise, eventually, you will be left with no one taking the job, or with people taking the job who will conserve the status quo. This is the opposite of pushing things through at high speed. There needs to be an environment in place which can absorb this variety before you can do that (see also: Peter Senge’s “Learning Organisation”).
On a final note, I agree that the whole lobbying got out of hand. I also agree on the back-door issue, and I would probably agree on a dozen other things. I am not in the seat of generally approving what the European administration is doing. One of my initial points, however, was that the EU is not “the evil, dumb-as-bricks creator” of the cookie-popup mess. Instead, this is probably one of the biggest cases of malicious compliance in history. And still, the EU gets the full, 100% blame, almost unanimously (and no comment as to what the initial goal was).
That is quite a shift away from the accountability you were just so interested in not losing.
The internet is riddled with popups and attention grabbing dark patterns, but the only one that's a problem is the one that actually lets you opt out of being tracked to death?
...yes? There are countless ways it could have been implemented that would have been more effective, and less irritating for billions of people. Force companies to respect the DNT header. Ta-daa, done. But that wouldn't have been profitable, so instead let's cook up a cottage industry of increasingly obnoxious consent banners.
Actually, it's because marketing departments rely heavily on tracking cookies and pixels to do their job, as their job is measured on things like conversions and understanding how effective their ad spend is.
The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing they were doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.
No it's because I'll get fined by some bureaucrat who has never run a business in his life if I don't put a pointless popup on my stupid-simple shopify store.
You know it's possible to make good, reasoned points without cramming "<pseudo-Marxist buzzword> capitalism" into a sentence for absolutely no reason.
All I want is to not be forced to irritate my customers about something that nobody cares about. It doesn't have to be complicated. It is how the internet was for all of its existence until a few years ago.
There you go. Shopify does a bunch of analytics gathering for you. Whether you choose to use it or not, the decision was made by someone who thought it would be a value add and now you need a banner.
No need for a cookie banner if you don't collect data without consent. Every modern browser supports APIs that answer that question without pestering the user with a cookie banner.
It's important to point out that it's actually not at all about cookies. It's tracking by using information stored on the user's device in general that needs to have consent.
You could use localStorage for the purposes of tracking and it still needs to have a popup/banner.
An authentication cookie does not need a cookie banner, but if you issue lots of network requests for tracking and monitor server logs, that does now need a cookie banner.
If you don't store anything, but use fingerprinting, that is not covered by the law but could be covered by GDPR afaiu
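The purpose-based nature of the rule is easier to see sketched as code. This is a hypothetical helper, not any official API; the exemption list is illustrative of the "strictly necessary" carve-out, and the `storage` argument stands in for anything with a `setItem` method (cookies wrapped, `localStorage`, etc.):

```javascript
// Sketch: ePrivacy-style consent gating keys on the *purpose* of device
// storage, not the mechanism - cookie, localStorage, IndexedDB all alike.
// The purposes below are illustrative "strictly necessary" examples.
const EXEMPT_PURPOSES = new Set([
  "authentication", // session/login state
  "shopping-cart",  // remembering items during checkout
  "security",       // fraud prevention, CSRF tokens
]);

function requiresConsent(purpose) {
  return !EXEMPT_PURPOSES.has(purpose);
}

// Refuse to write tracking data to any storage without prior consent.
function storeOnDevice(storage, key, value, purpose, hasConsent) {
  if (requiresConsent(purpose) && !hasConsent) {
    return false;
  }
  storage.setItem(key, value);
  return true;
}
```

Under this framing, a login cookie goes through without any banner, while an analytics identifier is blocked until the user opts in - which is exactly the distinction the comments above are describing.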
Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.
I hope this isn't coming down to an argument of "AI can't advance if there are rules". Things like copyright, protection of the sources of information, etc.
So then it's something completely worthless in the globally competitive cutthroat business world, that even the companies who signed won't follow, they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
No, it's a voluntary code of conduct so AI providers can start implementing changes before the conduct becomes a legal requirement, and so the code itself can be updated in the face of reality before legislators have to finalize anything. The EU does not have foresight into what reasonable laws should look like, they are nervous about unintended consequences, and they do not want to drive good-faith organizations away, they are trying to do this correctly.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
It's a chance for businesses to try out the rules, so they can form an informed opinion and give useful feedback when the EU turns it into an actual law. And also so they don't have to scramble to comply once the rules suddenly become binding.
But well, I wouldn't expect Meta to sign into it either.
“Heft of EU endorsement.” It’s amazing how Europeans have simply acquiesced to an illegitimate EU imitation government simply saying, “We dictate your life now!”.
European aristocrats just decided that you shall now be subjects again, and Europeans said OK. It’s kind of astonishing how easy it was, and most Europeans I’ve met almost violently reject that notion, in spite of the fact that it’s exactly what happened, as they still haven’t really grasped just how much Brussels is stuffing them.
In a legitimate system it would need to be up to each sovereign state to decide something like that, but in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
Honestly, US is really not in a good shape to support your argument.
If aristocratic figures had so much power in the EU, they wouldn't be fleeing from the union.
In reality, the US is plagued with greed, scams, and mafias in all sectors, human rights violations, and an economy that's like a house of cards. In contrast, you feel human when you're in the EU. You have a voice, rights, and common sense!
It definitely has its flaws, but at least the presidents there are not rug-pulling their own citizens and giving pardons to crypto scammers... Right?
> in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
I am happy to inform you that the EU actually works according to treaties which basically cover every point of a constitution and has a full set of courts of law ensuring the parliament and the European executive respect said treaties and allowing European citizens to defend their interests in case of overreach.
> European aristocrats just decided
I am happy to inform you that the European Union has a democratically elected parliament voting its laws and that the head of commission is appointed by democratically elected heads of states and commissioners are confirmed by said parliament.
If you still need help with any other basic fact about the European Union don’t hesitate to ask.
The world seems to be literally splitting apart, and Meta was a huge part of sowing discontent and stoking violence. I hope to move to Europe one day and I can use an open source LLM at that point
I admit that I am biased enough to immediately expect the AI agreement to be exactly what we need right now if this is how Meta reacts to it. Which I know is stupid because I genuinely have no idea what is in it.
If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and it probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet whether it can be called fair use?)
We have exceptions, which are similar, but the important difference is that with fair use, courts decide what is fair and what is not, whereas exceptions are written into law. It is a more rigid system that tends to favor copyright owners, because if something seen as "fair" doesn't fit one of the listed exceptions, copyright still applies. Note that AI training probably fits one of the exceptions in French law (but again, it is complicated).
I don't know the law in other European countries, but AFAIK, EU and international directives don't do much to address the exceptions to copyright, so it is up to each individual country.
Same in Sweden. The U.S. has one of the broadest and most flexible fair use laws.
In Sweden we have "citaträtten" (the right to quote). It only applies to text and it is usually said that you can't quote more than 20% of the original text.
Even if it gets challenged successfully (and tbh I hope it does), the damage is already done. Blocking it at this stage just pulls up the ladder behind the behemoths.
Unless the courts are willing to put injunctions on any model that made use of illegally obtained copyrighted material - which would pretty much be all of them.
Anthropic bought millions of books and scanned them, meaning that (at least for those sources) they were legally obtained. There has also been rampant piracy used to obtain similar material, which I won't defend. But it's not an absolute - training can be done on legally acquired material.
Apologies, I read your original statement as somehow concluding that you couldn't train an AI legally. I just wanted to make it extra clear that based on current legal precedent in the U.S., you absolutely can. Methodology matters, though.
You really went all out with showing your contempt, huh? I'm glad that you're enjoying the tech companies utterly dominating US citizens in the process
As a citizen I’m perfectly happy with the AI Act. As a “person in tech”, the kind of growth being “stunted” here shouldn’t be happening in the first place. It’s not overreach to put up some guardrails and protect humans from the overreaching ideas of the techbro elite.
FOMO is not a valid reason to abandon the safety and wellbeing of the people who will inevitably have to endure all this “AI innovation”. It’s just like building a bridge - there are rules and checks and triple checks
Yep, that's why they need to regulate ASML. Tell ASML they can only service 'compliant' foundries, where 'compliant' foundry means 'only sells to compliant datacenters/AI firms'.
As a member of the techbro elite, I find it incredibly annoying when people regulate stuff that ‘could’ be used for something bad (and many good things), instead of regulating someone actually using it for something bad.
You’re too focused on the “regulate” part. It’s a lot easier to see it as a framework. It spells out what you need in order to anticipate the spirit of the law and what’s considered good or bad practice.
If you actually read it, you will also realise it’s entirely comprised of “common sense”. Like, you wouldn’t want to do the stuff it says are not to be done anyway. Remember, corps can’t be trusted because they have a business to run. So that’s why when humans can be exposed to risky AI applications, the EU says the model provider needs to be transparent and demonstrate they’re capable of operating a model safely.
I feel a lot of emotions in your comment but no connection with reality. The AI Act is really not that big of a deal. If Meta is unhappy with it, it means it’s working.
Meta isn't actually an AI company, as much as they'd like you to think they are now. They don't mind if nobody comes out as the big central leader in the space, they even release the weights for their models.
Ask Meta to sign something about voluntarily restricting ad data or something and you'll get your same result there.
About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for quote "national security missions." And all that is after the military enlisted OpenAI's "Chief Product Officer" and sent him straight to Lt. Colonel to work in a collaborative role directly with the military.
And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
Nit: (possibly cnbc's fault)
there should be a hyphen to
clarify meta opposes overreach, not growth. "growth-stunting overreach" vs "growth (stunting overreach)"
EU-wide ban of Meta incoming? I'd celebrate personally, Meta and their products are a net negative on society, and only serve to pump money to the other side of the Atlantic, to a nation that has shown outright hostility to European values as of late.
People complain more about cookie banners than they do the actual invasive tracking by those cookies.
Those banners suck and I wouldn't mind if the EU rolled back that law and tried another approach. At the same time, it's fairly easy to add an extension to your browser that hides them.
Legislation won't always work. It's complex and human behavior is somewhat unpredictable. We've let tech run rampant up to this point - it's going to take some time to figure out how to best control them. Throwing up our hands because it's hard to protect consumers from powerful multi-national corporations is a pretty silly position imo.
Can you expand on what sort of rationality would lead a person to consider an at-worst-annoying pop-up to be more dangerous than data exfiltration to companies and governments that are already acting in adversarial ways? The US government is already using people's social media profiles against them, and under the CLOUD Act any US company can be compelled to hand data over to the government, as Microsoft just testified in France. That's less dangerous than an info pop-up?
Of course it has nothing to do with rationality. They're mad at the first thing they see, akin to the smoker who blames the regulators when he has to look at a picture of a rotten lung on a pack of cigarettes
GDPR doesn't stop governments. Governments are already spying without permission, and they exploit stolen data all the time. So yes, the cost of GDPR compliance, including popups, is higher than the imperceptible cost of tracked advertising.
For one, that is objectively incorrect. GDPR prevents a whole host of data collection outright, shifts the burden onto corporations to collect the minimal amount of data possible, and gives you the right to explicitly consent to what data can be collected.
Being angry at a popup that merely makes transparent what a company tries to collect from you, and gives you the explicit option to say no to that, is just infantile. It basically amounts to saying that you don't want to think about how companies are exploiting your data, and that you're a sort of internet browsing zombie. That is certainly a lot of things, but it isn't rational.
Then the entire ad industry moved to fingerprinting, mobile ad SDKs, and third-party authentication login systems, so it made zero difference even when sites did comply. Google and Meta aren't worried about cookies when they have JS on every single website, but it burdens every website user.
This is not correct, the regulation has nothing to do with cookies as the storage method, and everything to do with what kind of data is being collected and used to track people.
Meta is hardly to blame here; it is the site owners that choose to add Meta tracking code to their site and therefore have to disclose it and opt in the user via "cookie banners".
that's deflecting responsibility. it's important to care about the actual effects of decisions, not hide behind the best case scenario. especially for governments.
in this case, it is clear that the EU policy resulted in cookie banners
This thread is people going "EU made me either choose to tell you that I spy on you or stop spying on you, now I need to tell everyone I spy on them, fucking EU".
Meta on the warpath, Europe falls further behind. Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.
Yeah. He just settled the Cambridge Analytica suit a couple days ago, he basically won the Canadian online news thing, he's blown billions of dollars on his AI angle. He's jacked up and wants to fight someone.
> It aims to improve transparency and safety surrounding the technology
Really, it does, especially with a technology run by so few that is changing things so fast.
> Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth
God forbid critical things and impactful tech like this be created with a measured head, instead of this nonsense mantra of "Move fast and break things"
I'd really prefer NOT to break at least what semblance of society social media hasn't already broken.
The US, China and others are sprinting, and thus spiraling towards the destitution of the majority of society, unless we force these billionaires' hands and figure out how we will eat and sustain our economies when one person is now doing a white- or blue-collar (Amazon warehouse robots) job that ten used to do.
I think it is a legitimate concern in Europe just because their economies are getting squeezed from all sides by USA and China. It's a lot more in the public consciousness now since Trump said all the quiet stuff out loud instead of just letting the slow boil continue.
The more I read of the existing rule sets within the eurozone, the less surprised I am that they make additional shit-tier acts like this.
What does surprise me is that anything at all works with the existing rulesets. Effectively no one has technical competence, and the main purpose of legislation seems to be adding mostly meaningless but paternalistically formulated complexities in order to justify hiring more bureaucrats.
>How to live in Europe
>1. Have a job that does not need state approval or licensing.
>2. Ignore all laws, they are too verbose and too technically complex to enforce properly anyway.
The problem with EU regulation is the same as always: first and foremost, they do not understand the topic and cannot articulate a clear statement of law.
They create mountains of regulations, which are totally unclear and which require armies of lawyers to interpret. Adherence to these regulations becomes a major risk factor for all involved companies, which then try to avoid interacting with that regulation at all.
Getting involved with the GDPR is a total nightmare, even if you want to respect your users privacy.
Regulating AI like this is especially idiotic, since every year currently shows a major shift in how AI is utilized. It is a totally open question how hard training an AI "from scratch" will be in 5 years. The EU is incapable of actually writing laws which make it clear what isn't allowed; instead they are creating vague corridors for how companies should arrive at certain outcomes.
The bureaucrats see themselves as the innovators here. They aren't trying to make laws which prevent abuses, they are creating corridors for processes for companies to follow. In the case of AI these corridors will seem ridiculous in five years.
The economy does not exist in a vacuum. Making number go up isn't the end goal, it is to improve citizens lives and society as a whole. Everything is a tradeoff.
I charge my phone wirelessly. The presence of a port isn't a positive for me. It's just a hole I could do without. The shape of the hole isn't important.
Europe is the world's second largest economy and has the world's highest standard of living. I'm far from a fan of regulation but they're doing a lot of things right by most measures. Irrelevancy is unlikely in their near future.
Just like GDPR, it will tremendously benefit big corporations (even if Meta is resistant) and those who are happy NOT to follow regulations (which is a lot of Chinese startups).
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way to say that there would be less scrutiny if you voluntarily sign up for this code of practice. Meta would face scrutiny on all ends anyway, so there does not seem to be a plausible case for signing something voluntary.
One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
[1] https://www.lw.com/en/insights/2024/11/european-commission-r...
The quoted text makes sense when you understand that the EU provides a carveout for training on copyright protected works without a license. It's quite an elegant balance they've suggested despite the challenges it fails to avoid.
Is that true? How can they decide to wipe out the intellectual property for an individual or entity? It’s not theirs to give it away.
Copyright is not a God-given right. It's an economic incentive created by government to make desired behavior (writing and publishing books) profitable.
Yes, 100%. And that’s why throwing copyright selectively in the bin now when there’s an ongoing massive transfer of wealth from creators to mega corps, is so surprising. It’s almost as if governments were only protecting economic interests of creators when the creators were powerful (eg movie studios), going after individuals for piracy and DRM circumvention. Now that the mega corps are the ones pirating at a scale they get a free pass through a loophole designed for individuals (fair use).
Anyway, the show must go on, so we're unlikely to see any reversal of this. It's a big experiment, and not necessarily anything that will benefit even the model providers themselves in the medium term. It's clear that the "free for all" policy on grabbing whatever data you can get is already having chilling effects: from artists and authors not publishing their works publicly, to the locking down of the open web with anti-scraping. We're basically entering an era of adversarial data management, with incentives to exploit others for data while protecting the data you have from others accessing it.
Why? Copyright is 1) presented as being there to protect the interests of the general public, not creators, and 2) the Statute of Anne, the birth of modern copyright law, protected printers - that is, "big business" - over creators anyway, so even that has largely always been a fiction.
But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.
> From artists and authors not publishing their works publicly
The vast majority of creators have never been able to get remotely close to make a living from their creative work, and instead often when factoring in time lose money hand over fist trying to get their works noticed.
I generally let it slide because these copyright discussions tend to be about America, and as such it can be assumed American law and what it inherits from British law is what pertains.
>Copyright is 1) presented as being there to protect the interests of the general public, not creators,
Yes, in the U.S. In the EU, creators have moral rights to their works and the law is there to protect their interests.
There are actually moral rights and rights of exploitation, in EU you can transfer the latter but not the former.
>But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.
In the EU's view of copyright the public doesn't need to get a good deal, the creators of copyrighted works do.
> There are actually moral rights and rights of exploitation, in EU you can transfer the latter but not the former.
And when we talk about copyright we generally talk about the rights of exploitation, where the rationale used today is about the advancement of arts and sciences - a public benefit. There's a reason the name in English is copy-right, where the other Germanic languages focus more on the work - in the Anglosphere the notion of moral rights as separate from rights of exploitation is well outside the mainstream.
> In the EU's view of copyright the public doesn't need to get a good deal, the creators of copyrighted works do.
Most individual nations copyright law still does uphold the pretence of being for the public good, however. Without that pretence, there is no moral basis for restricting the rights of the public the way copyright law does.
But it has nevertheless been abundantly clear all the way back to the Statute of Anne that any talk of either public goods or rights of exploitation for the creator are excuses, and that these laws if anything mostly exist for the protection of business interests.
>Most individual nations copyright law still does uphold the pretence of being for the public good, however. Without that pretence, there is no moral basis for restricting the rights of the public the way copyright law does.
I of course do not know all the individual EU countries' rules, but my understanding was that the EU's view is what it is because it derives, at least in part, from the previous understanding of its member nations. So the earlier French laws, before ratification and implementation of the EU directive on author's rights in Law # 92-597 (1 July 1992), were also focused on the understanding of creators having creator's rights, and that protecting these was the purpose of copyright law; this pattern generally held throughout EU lands (at least any lands currently in the EU - I suppose pre-Brexit this was not the case).
You probably have some other examples but in my experience the European laws have for a long time held that copyright exists to protect the rights of creators and not of the public.
> So the earlier French laws before ratification and implementation of the EU directive on author's rights in Law # 92-597 (1 July 1992) were also focused on the understanding of creators having creator's rights
French law, similar to e.g. Norwegian and German law, separated moral and proprietary rights.
Moral rights are not particularly relevant to this discussion, as they relate specifically to rights to e.g. be recognised as the author, and to protect the integrity of a work. They do not relate to actual copying and publication.
What we call copyright in English is largely proprietary/exploitation rights.
The historical foundation of the latter is firmly one of first granting rights on a case-by-case basis, often to printers rather than creators, and then with the Statute of Anne that explicitly stated the goal of "encouragement of learning" right in the title of the act. This motivation was later made explicit in e.g. the US constitution.
Since you mention France, the National Assembly after the French Revolution took the stance that works by default were public property, and that copyright was an exception, in the same vein as per the Statute of Anne and US Constitution ("to promote the progress of science and useful arts").
Depository laws etc., which are near universal, are also firmly rooted in this view that copyright is a right grant provided on a quid pro quo basis: the work needs to be secured for the public for the future, irrespective of continued commercial availability.
> Why? Copyright is 1) presented as being there to protect the interests of the general public, not creators
Doesn't matter; both the "public interest" and "creator rights" arguments have the same impact: you're either hurting creators directly, or you're hurting the public benefit when you remove or reduce the economic incentives. The transfer of wealth and irreversible damage is there, whether you care about Lars Ulrich's gold toilet or our future kids, who won't be able to enjoy culture and libraries protected from adversarial and cynical tech moguls.
> 2) Statute of Anne, the birth of modern copyright law, protected printers - that is "big businesss" over creators anyway, so even that has largely always been a fiction.
> The vast majority of creators have never been able to get remotely close to make a living from their creative work
Nobody is saying copyright is perfect. We’re saying it’s the system we have and it should apply equally.
Two wrongs don’t make a right. Defending the AI corps on basis of copyright being broken is like saying the tax system is broken, so therefore it’s morally right for the ultra-rich to relocate assets to the Caymans. Or saying that democracy is broken, so it’s morally sound to circumvent it (like Thiel says).
You've put into words what I've been internally struggling to voice. Information (on the web) is a gas, it expands once it escapes.
In limited, closed systems, it may not escape, but all it takes is one bad (or hacked) actor and the privacy of it is gone.
In a way, we used to be "protected" because it was "too big" to process, store, or access "everything".
Now, especially with an economic incentive to vacuum literally all digital information, and many works being "digital first" (even a word processor vs a typewriter, or a PDF that is sent to a printer instead of lithograph metal plates)... is this the information Armageddon?
Copyright is the backbone of modern media empires. It allows both small creators and massive corporations to seek rent on works, but since works are under copyright for a century, it's quite nice for corporations.
Governments always protect the interests of their powerful friends and donors over the people they allegedly represent.
They've just mastered the art of lying to gullible idiots or complicit sycophants.
It's not new to anyone who pays any kind of attention.
Actually, in much of the EU, if not all of it, copyright is an intrinsic right of the creator.
It is a "right" created by law, is the point. This is not a right that is universally recognised, nor one that has existed since time immemorial, but a modern construction of governments that governments can choose to change or abolish.
What is a right that has existed since time immemorial? Generally, rights that have existed "forever" are codified rights and, in the codification, described as being eternal. Hence Jefferson's reference to inalienable rights, which probably came as some surprise to King George III.
on edit: If we had a soundtrack the Clash Know Your Rights would be playing in this comment.
Except of course that the point is that copyright is generally not described this way.
See my more extensive overview in another response.
The history of copyright law is one where it is regularly described either in the debates around the passing of the laws, or in the laws themselves, as a utilitarian bargain between the public and creators.
E.g. since you mention Jefferson and mention "inalienable": notably, copyright in the US is not an inalienable right at all, but a right that the US constitution grants Congress the power to enact "to promote the progress of science and useful arts". It says nothing about being an inalienable or eternal right of citizens.
And before you bring up France, or other European law, I suggest you read the other comment as well.
But to add more than I did in the other comment: e.g. in Norway, the first paragraph of the copyright law ("Lov om opphavsrett til åndsverk mv.") gives 3 motivations: 1 a) to grant rights to creators to give incentives for cultural production, 1 b) to limit those rights to ensure a balance between creators' rights and public interests, 1 c) to provide rules to make it easy to arrange use of copyrighted works.
There's that argument about incentives and balancing public interests again.
This is the historical norm. It is not present in every copyright law, but they share the same historical nucleus.
Early copyright was a take on property rights, applied to supposed labour of the soul and subsequent ownership of its fruits.
Copyright stems from the 15-1600s, while utilitarianism is a mid-1800s kind of thing. The move from explicitly religious and natural rights motivations to language about "intellect" and hedonism is rather late, and I expect it to be tied to an atheist and utilitarian influence from socialist movements.
The first modern copyright law dates to 1709, and was most certainly not a "take on property rights". Neither were pre-Statute of Anne monopoly grants.
I can find nothing to suggest a "religious and natural rights" motivation, nor any language about "intellect and hedonism".
Statute of Anne - which specifically gives a utilitarian reason 150 years before your "mid-1800s" estimate also predates socialism by a similar amount of time, and dates to a time were there certainly wasn't any major atheist influence either, so this is utterly ahistorical nonsense.
Copyright originates in the Statute of Anne[0]; its creation was therefore within living memory when the United States declared their independence.
No rights have existed 'forever', and both the rights and the social problems they intend to resolve are often quite recent (assuming you're not the sort of person who's impressed by a building that's 100 years old).
George III was certainly not surprised by Jefferson's claim to rights, given that the rights he claimed were copied (largely verbatim) from the Bill of Rights 1689[1]. The poor treatment of the Thirteen Colonies was due to Lord North's poor governance, the rights and liberties that the Founding Fathers demanded were long-established in Britain, and their complaints against absolute monarchy were complaints against a system of government that had been abolished a century before.
[0] https://en.wikipedia.org/wiki/Statute_of_Anne
[1] https://en.wikipedia.org/wiki/Bill_of_Rights_1689
>No rights have existed 'forever'
you should probably reread the text I responded to and then what I wrote, because you seem to think I believe there are rights that are not codified by humans in some way and are on a mission to correct my mistake.
>George III was certainly not surprised by Jefferson's claim to rights, given that the rights he claimed were copied (largely verbatim) from the Bill of Rights 1689
to repeat: Hence Jefferson's reference to inalienable rights, which probably came as some surprise to King George III.
inalienable modifies rights here, if George is surprised by any rights it is inalienable ones.
>Copyright originates in the Statute of Anne[0]; its creation was therefore within living memory when the United States declared their independence.
title of post is "Meta says it won't sign Europe AI agreement", I was under the impression that it had something to do with how the EU sees copyright and not how the U.S and British common law sees it.
Hence multiple comments referencing EU but I see I must give up and the U.S must have its way, evidently the Europe AI agreement is all about how copyright works in the U.S, prime arbiter of all law around the globe.
At any rate, rights that are described as being eternal, or some variant of that such as inalienable - or, in the case of copyright, moral and intrinsic - are rights such that, if the government that has heretofore described them as inviolate were to casually violate them, that government would be declaring its own nullification by its previously stated rules.
Not to say this doesn't happen, I believe we can see it happening in some places in the world right now, but these are classes of laws that cannot "just" be changed at the government's whim, and in the EU copyright law is evidently one of those classes of law, strange as it seems.
And the relevant rights to exploit the work are almost never described as moral, intrinsic, inalienable or similar, so this is largely moot.
Yes it is. In every sense of the phrase, except the literal.
A lot of cultures have not historically considered artists’ rights to be a thing and have had it essentially imposed on them as a requirement to participate in global trade.
Even in Europe copyright was protected only for the last 250 years, and over the last 100 years it’s been constantly updated to take into consideration new technologies.
The only real mistake the EU made was not regulating Facebook when it mattered. That site caused pain and damage to entire generations. Now it's too late. All they can do is try to stop Meta and the rest of the lunatics from stealing every book, song and photo ever created, just to train models that could leave half the population without a job.
Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.
Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.
The world is asking for US big tech companies to be regulated more now than ever.
Regulating FB earlier wouldn’t help much I think, it would grow just as fast with other, mostly US, markets and it would be just as powerful today.
To be fair, "copy"right has only been needed for as long as it's been possible to copy things. In the grand scheme of human history, that technology is relatively new.
Copying was a thing for a very long time before the Statute of Anne. Just not mechanically. Copyright coincided with the rise of mechanical copying.
Copyright predates mechanical copying. However, people used to have to petition a King or similar to be granted a monopoly on a work, and the monopoly was specific to that work.
The Statute of Anne - the first recognisable copyright law in anything remotely the modern sense - dates to 1709, long after the invention of movable type. Mechanical in the sense of printing with a press using movable type, not anything highly automated.
Having to petition for monopoly rights on an individual basis is nothing like copyright, where the entire point is to avoid having to ask for exceptions by creating a right.
"intellectual property" only exists because society collectively allows it to. it's not some inviolable law of nature. society (or the government that represents them) can revoke it or give it away.
Yes, but that's also true of all other things that society enforces-- basically the ownership of anything you can't carry with you.
Yes, that is why (most?) anarchists consider property that one is not occupying and using to be fiction, held up by the state. I believe this includes intellectual property as well.
The same is true for human rights.
In the EU, an author’s moral rights are similar in character to human rights: https://en.wikipedia.org/wiki/Authors'_rights
You're alive because society collective allows you to.
A person being alive is not at all similar to the concept of intellectual property existing. The former is a natural phenomenon, the latter is a social construct.
Copyright is literally granted by the gov.
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"
This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
> This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
I think you've got civil and common law the wrong way round :). US judges have _much_ more power to interpret law!
In the US, for most laws, and most judges, there's actually much less power to interpret law. Part of the benefit of the common law system is to provide consistency and take that interpretation power away from judges of each case.
My claim is that at a system-level, judges in the US have more power to interpret laws. Your claim is that "in each individual case, the median amount of interpretation is lower in the US than the EU". But you also concede that this is because the judges rely on the interpretations of _other_ judges in cases (e.g. if the Supreme Court makes a very important decision which clarifies how a law should be interpreted, and this is then carried down throughout the rest of the justice system, then this means that there has been a really large amount of interpretation).
It is European law, as in EU law, not law from a European state. In EU matters, the teleological interpretation, i.e. intent, applies:
> When interpreting EU law, the CJEU pays particular attention to the aim and purpose of EU law (teleological interpretation), rather than focusing exclusively on the wording of the provisions (linguistic interpretation).
> This is explained by numerous factors, in particular the open-ended and policy-oriented rules of the EU Treaties, as well as by EU legal multilingualism.
> Under the latter principle, all EU law is equally authentic in all language versions. Hence, the Court cannot rely on the wording of a single version, as a national court can, in order to give an interpretation of the legal provision under consideration. Therefore, in order to decode the meaning of a legal rule, the Court analyses it especially in the light of its purpose (teleological interpretation) as well as its context (systemic interpretation).
https://www.europarl.europa.eu/RegData/etudes/BRIE/2017/5993...
> It is European law, as in EU law, not law from a European state. In EU matters, the teleogocial interpretation, i.e. intent applies
I'm not sure why you and GP are trying to use this point to draw a contrast to the US? That very much is a feature in US law as well.
I will admit my ignorance of the finer details of US law - could you share resources explaining the parallels?
> Even for open source models, you can add a license term that requires users of the open source model to take appropriate measures to avoid [...]
You just made the model not open source
Instead of a license term you can put that in your documentation - in fact that is exactly what the code of practice mentions (see my other comment) for open source models.
An open source cocaine production machine is still an illegal cocaine production machine. The fact that it's open source doesn't matter.
You seem to not have understood that different kinds of applications need to comply with different kinds of law. And whether you can call it open source or not doesn't change anything about its legal aspects.
And every law written is a compromise between two opposing parties.
“Source available” then?
Except that it’s seemingly impossible to protect against prompt injection. The cat is out of the bag. Much like a lot of other legislation (e.g. the cookie law, or being responsible for user-generated content when millions of items are posted per day), it’s entirely impractical, albeit well-meaning.
I don't think the cookie law is that impractical? It's easy to comply with by just not storing non-essential user information. It would have been completely nondisruptive if platforms agreed to respect users' defaults via browser settings, and then converged on a common config interface.
It was made impractical by ad platforms and others who decided to use dark patterns, FUD and malicious compliance to deceive users into agreeing to be tracked.
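A browser-level default signal of the kind described above actually exists: the Global Privacy Control header (`Sec-GPC: 1`). As a minimal sketch of what respecting it server-side could look like - the header names are real, but the function names and cookie values here are illustrative, not any particular framework's API:

```python
# Sketch: honor browser opt-out signals (Sec-GPC / DNT) before setting
# any non-essential tracking cookies. Request headers are modeled as a
# plain dict for illustration.

def tracking_allowed(headers: dict) -> bool:
    """Non-essential tracking is allowed only if the browser sent
    neither Sec-GPC: 1 (Global Privacy Control) nor DNT: 1."""
    opt_out = headers.get("Sec-GPC") == "1" or headers.get("DNT") == "1"
    return not opt_out

def build_set_cookie_headers(headers: dict) -> list[str]:
    # Essential (e.g. session) cookies don't require consent.
    cookies = ["session=abc123; HttpOnly; Secure"]
    if tracking_allowed(headers):
        # Only add analytics/ad cookies when no opt-out signal is present.
        cookies.append("analytics_id=xyz789; SameSite=Lax")
    return cookies

# A browser sending Sec-GPC gets only the essential cookie:
print(build_set_cookie_headers({"Sec-GPC": "1"}))
print(build_set_cookie_headers({}))
```

Had sites converged on honoring a signal like this, a per-site banner would have been redundant for users who set the preference once in the browser.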
I recently received an email[0] from a UK entity with an enormous wall of text talking about processing of personal information, my rights and how there is a “Contact Card” of my details on their website.
But with a little bit of reading, one could ultimately summarise the enormous wall of text simply as: “We’ve added your email address to a marketing list, click here to opt out.”
The huge wall of text email was designed to confuse and obfuscate as much as possible with them still being able to claim they weren’t breaking protection of personal information laws.
[0]: https://imgur.com/a/aN4wiVp
>The huge wall of text email was designed to confuse and obfuscate as much as possible with
It is pretty clear
Only if you read it. Most people do not read it, same with ToSes.
If you ask someone if they killed your dog and they respond with a wall of text, then you’re immediately suspicious. You don’t even have to read it all.
The same is true of privacy policies. I’ve seen some companies have very short policies I could read in less than 30s, those companies are not suspicious.
That's true because of EU privacy regulation: it makes companies write a wall of text before doing something suspicious.
I do not disagree. It could indeed be made shorter than usual, especially if you are not malicious.
Even EU government websites have horrible intrusive cookie banners. You can't blame ad companies, there are no ads on most sites
Because they track usage stats for site development purposes, and there was no convergence on an agreed upon standard interface for browsers since nobody would respect it. Their banners are at least simple yes/no ones without dark patterns.
But yes, perhaps they should have worked with e.g. Mozilla to develop some kind of standard browser interface for this.
This is actually not true. I just read the European commission's cookie policy.
The main reason they need the banner is because they show you full page popups to ask you to take surveys about unrelated topics like climate action. They need consent to track whether or not you've taken these surveys
Their banner is just as bad as any other I have seen, it covers most of the page and doesn't go away until I click yes. If you're trying to opt out of cookies on other sites, that's probably why it takes you longer (just don't do that).
You don't need cookie banners if you don't use invasive telemetry.
A website that sticks to being a website does not need cookie banners.
Then why does the EU commission believe they need one for their pages that explain these rules? What invasive telemetry are they using?
Are there any websites that don't require these banners?
If you allow users to set font size or language you need a banner btw
They create profiles of visitors, e.g. through polls.
It's usually a click or two to "reject all" or similar with serious organisations. Some German corporations are nasty and conflate paywalls with consent to data collection and processing.
It is impractical for me as a user. I have to click through a notice on every website on the internet before interacting with it, notices which are often very obtuse and don’t have a “reject all” button, only a “manage my choices” button that takes you to an even more convoluted menu.
Instead of exactly as you say: a global browser option.
As someone who has had to implement this crap repeatedly - I can’t even begin to imagine the amount of global time that has been wasted implementing this by everyone, fixing mistakes related to it and more importantly by users having to interact with it.
Yeah, but the only reason for this time wastage is that website operators refuse to accept what would become the fallback default of "minimal", for which they would not need to seek explicit consent. It's a kind of arbitrage, like those scammy websites that send you into redirect loops with enticing headlines.
The law is written to encourage such defaults if anything, it just wasn't profitable enough I guess.
Not even EU institutions themselves are falling back on defaults that don't require cookie consent.
I'm constantly clicking away cookie banners on UK government or NHS (our public healthcare system) websites. The ICO (UK privacy watchdog) requires cookie consent. The EU Data Protection Supervisor wants cookie consent. Almost everyone does.
And you know why that is? It's not because they are scammy ad funded sites or because of government surveillance. It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors.
This is impractical, unreasonable, counterproductive and unintelligent.
> completely reasonable
This is a personal decision to be made by the data "donor".
The NHS website cookie banner (which does have a correct implementation in that the "no consent" button is of equal prominence to the "mi data es su data" button) says:
> We'd also like to use analytics cookies. These collect feedback and send information about how our site is used to services called Adobe Analytics, Adobe Target, Qualtrics Feedback and Google Analytics. We use this information to improve our site.
In my opinion, it is not, as described, "completely reasonable" to consider such data hand-off to third parties as implicitly consented to. I may trust the NHS but I may not trust their partners.
If the data collected is strictly required for the delivery of the service and is used only for that purpose and destroyed when the purpose is fulfilled (say, login session management), you don't need a banner.
The NHS website is in a slightly tricky position, because I genuinely think they will be trying to use the data for site and service improvement, at least for now, and they hopefully have done their homework to make sure Adobe, say, are also not misusing the data. Do I think the same from, say, the Daily Mail website? Absolutely not, they'll be selling every scrap of data before the TCP connection even closes to anyone paying. Now, I may know the Daily Mail is a wretched hive of villainy and can just not go there, but I do not know about every website I visit. Sadly the scumbags are why no-one gets nice things.
>This is a personal decision to be made by the data "donor".
My problem is that users cannot make this personal decision based on the cookie consent banners because all sites have to request this consent even if they do exactly what they should be doing in their users' interest. There's no useful signal in this noise.
The worst data harvesters look exactly the same as a site that does basic traffic analysis for basic usability purposes.
The law makes it easy for the worst offenders to hide behind everyone else. That's why I'm calling it counterproductive.
[Edit] Wrt NHS specifically - this is a case in point. They use some tools to analyse traffic in order to improve their website. If they honour their own privacy policy, they will have configured those tools accordingly.
I understand that this can still be criticised from various angles. But is this criticism worth destroying the effectiveness of the law and burying far more important distinctions?
The law makes the NHS and the Daily Mail look exactly the same to users as far as privacy and data protection is concerned. This is completely misleading, don't you think?
I don't think it's too misleading, because in the absence of any other information, they are the same.
What you could then add to this system is a certification scheme to permit implicit consent of all the data handling (including who you hand it off to and what they are allowed to do with it, as well as whether they have demonstrated themselves to be trustworthy) is audited to be compliant with some more stringent requirements. It could even be self-certification along the lines of CE marking. But that requires strict enforcement, and the national regulators so far have been a bunch of wet blankets.
That actually would encourage organisations to find ways to get the information they want without violating the privacy of their users and anyone else who strays into their digital properties.
>I don't think it's too misleading, because in the absence of any other information, they are the same.
But that other information is not absent, and it tells us they are not the same. Just compare their privacy policies, for instance. The cookie law makes them appear similar in spite of the fact that they are very different (as of now - who knows what will happen to the NHS).
I do understand the point, but other than allowing a process of auditing to allow a middle ground of consent implied for first-party use only and within some strictly defined boundaries, what else can you do? It's a market for lemons in terms of trustworthy data processors. 90% (bum-pull figures, but lines up with the number of websites that play silly buggers with hiding the no-consent button) of all people who want to use data will be up to no good and immediately try to bend and break every rule.
I would also be in favour of companies having to report all their negative data protection judgements against them and everyone they will share your data with in their cookie banner before giving you the choice as to whether you trust them.
If any rule is going to be broken and impossible to enforce, how can that be a justification for keeping a bad rule rather than replacing it with more sensible one?
I said they'd try to break them. Which requires vigilance and regulators stepping in with an enormous hammer. So far national regulators have been pretty weaksauce which is indeed very frustrating.
I'm not against improving the system, and I even proposed something, but I am against letting data abusers run riot because the current system isn't quite 100% perfect.
I'll still take what we have over what we had before (nothing, good luck everyone).
> even if they do exactly what they should be doing in their users' interest
If they only do this, they don't need to show anything.
Then we clearly disagree on what they should be doing.
And this is the crux of the problem. The law helps a tiny minority of people enforce an extremely (and in my view pointlessly) strict version of privacy at the cost of misleading everybody else into thinking that using analytics for the purpose of making usability improvements is basically the same thing as sending personal data to 500 data brokers to make money off of it.
If you are talking for example about invasive A/B tests, then the solution is to pay for testers, not to test on your users.
What exactly do you think should be allowed that still respects privacy, and isn't now?
I would draw the line where my personal data is exchanged with third parties for the purpose of monetisation. I want the websites I visit to be islands that do not contribute to anyone's attempt to create a complete profile of my online (and indeed offline) life.
I don't care about anything else. They can do whatever A/B testing they want as far as I'm concerned. They can analyse my user journey across multiple visits. They can do segmentation to see how they can best serve different groups of users. They can store my previous search terms, choices and preferences. If it's a shop, they can rank products according to what they think might interest me based on previous visits. These things will likely make the site better for me or at least not much worse.
Other people will surely disagree. That's fine. What's more important than where exactly to draw the line is to recognise that there are trade-offs.
The law seems to be making an assumption that the less sites can do without asking for consent the better most people's privacy will be protected.
But this is a flawed idea, because it creates an opportunity for sites to withhold useful features from people unless and until they consent to a complete loss of privacy.
Other sites that want to provide those features without complete loss of privacy cannot distinguish themselves by not asking for consent.
Part of the problem is the overly strict interpretation of "strictly necessary" by data protection agencies. There are some features that could be seen as strictly necessary for normal usability (such as remembering preferences) but this is not consistently accepted by data protection agencies so sites will still ask for consent to be on the safe side.
> It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors
Yup. That's what those 2000+ "partners" are all about if you believe their "legitimate interest" claims: "improve traffic"
>This is impractical, unreasonable, counterproductive and unintelligent.
It keeps the political grifters who make these regulations employed; that's kind of the main point of the EU/UK's endless stream of regulations upon regulations.
The reality is the data that is gathered is so much more valuable and accurate if you gather consent when you are running a business. Defaulting to a minimal config is just not practical for most businesses either. The decisions that are made with proper tracking data have a real business impact (I can see it myself - working at a client with 7 figure monthly revenue).
Im fully supportive of consent, but the way it is implemented is impractical from everyone’s POV and I stand by that.
Are you genuinely trying to defend businesses unnecessarily tracking users online? Why can't businesses sell their core product(s) and you know... not track users? If they did that, then they wouldn't need to implement a cookie banner.
Retargetting etc is massive revenue for online retailers. I support their right to do it if users consent to it. I don’t support their right to do it if users have not consented.
The conversation is not about my opinion on tracking, anyway. It’s about the impracticality of implementing the legislation that is hostile and time consuming for both website owners and users alike
> Retargetting etc is massive revenue for online retailers
Drug trafficking, stealing, scams are massive revenue for gangs.
Plus with any kind of effort put into a standard browser setting you could easily have some granularity, like: accept anonymous ephemeral data collected to improve website, but not stuff shared with third parties, or anything collected for the purpose of tailoring content or recommendations for you.
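To make that granularity concrete, such a browser setting could be little more than a purpose-to-boolean map that sites query, defaulting to deny. This is purely hypothetical — no such standard exists, and the purpose names here are invented:

```python
# Hypothetical sketch of a granular, browser-level consent preference.
# No such standard exists; the purpose names and structure are made up here
# to mirror the categories described above.

USER_PREFS = {
    "anonymous_site_improvement": True,   # ephemeral, first-party analytics
    "third_party_sharing": False,         # anything handed off to data brokers
    "personalisation": False,             # tailoring content/recommendations
}

def purpose_allowed(purpose: str, prefs: dict = USER_PREFS) -> bool:
    """Default-deny: unknown or unlisted purposes require explicit consent."""
    return prefs.get(purpose, False)
```

The key design choice is the default-deny fallback: a site asking about a purpose the user never configured gets "no", which is the opposite of how most consent banners behave today.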
Are you genuinely being this obtuse? What do you think Walmart and every single retailer does when you walk into a physical store? It’s constant monitoring to be able to provide a better customer experience. This doesn’t change online; businesses want to improve their service and they need the data to do so.
If you're talking about the same jurisdiction as these privacy laws, then this is illegal. You are only allowed to retain video for 24h, and basically only to use it for calling the police.
walmart has sales associates running around gathering all those data points, as well as people standing around monitoring. Their “eyes” aren’t regulated.
Walmart in the EU?
replace walmart with tesco or your eu retailer of choice, point still holds.
playing with semantics makes you sound smart though!
The question still stands then: Does it happen in Tesco in the EU? Because that is illegal.
The original idea was that it should be legal to track people because it's OK in the analog world. But it really isn't, and I'm glad it is illegal in the EU. I think it should be in the US as well, but the EU can't change that, and I have no right to political influence over foreign countries, so that doesn't matter.
it’s illegal for Tesco to have any number of employees watching/monitoring/“tracking” in the store with their own eyes and using those in-store insights to drive better customer experiences?
Compiling statistics about sex, age, number of children, clothing choice, or walking speed without consent sounds illegal. I think it's already forbidden not just for the company but for the individual, because that's voyeuristic behaviour.
Watching what is bought is fine, but walking around to do that is useless work, because you have that in the accounting/sales data already.
There is stuff like PayPal and now per-company apps that work the same as on the web: you need to first sign a contract. I would rather that be cracked down on, but I see that it's difficult, because you can't forbid individual choice. The incentive, I think, is that products become cheaper when you opt in to data collection. This is already forbidden, though: you can't tie consent to other benefits, or it isn't freely given consent anymore. I expect a lawsuit in the next decades.
> it’s always constant monitoring to be able to provide a better customer experience
This part gave me a genuine laugh. Good joke.
ah yes because walmart wants to harvest your in-store video data so they can eventually clone you right?
adjusts tinfoil hat
yeah this one wasn't as funny.
I can see how it hits too close to home for you
That is only true if you agree with ad platforms that tracking ads are fundamentally required for businesses, which is trivially untrue for most enterprises. Forcing businesses to get off privacy violating tracking practices is good, and it's not the EU that's at fault for forcing companies to be open about ad networks' intransigence on that part.
> just not practical for most businesses
I don't think practical is the right word here. All the businesses in the world operated without tracking until the mid 90s.
Why would I ever want to consent to you abusing my data?
Just don't process any personal data by default when not inherently required -> no banner required.
I don't have to, because there are add-ons to reject everything.
There is no way to enforce that license. Free software doesn't have funds for such lawsuits.
Lovely when they try to regulate a burgeoning market before we have any idea what the market is going to look like in a couple years.
The whole point of regulating it is to shape what it will look like in a couple of years.
Regulators often barely grasp how current markets function, and they are supposed to be futurists now too? Government regulatory interests almost always end up aligning with protecting entrenched interests, so it's essentially asking for a slow-moving group of the same mega companies. Which is very much what Europe's market looks like today: stasis and shifting to a stagnating middle.
The EU is founded on the idea of markets and regulation.
And also to prevent European powers trying to kill each other for the third time in a century, setting the whole world on fire in the process - for the third time in a century.
Arguably that worked. :-)
The EU is founded on the idea of useless bureaucracy.
It's not just IT. Ask any EU farmer.
Contrary to the constant whining, most of them are actually quite wealthy. And thanks to strong right to repair laws, they can keep using John Deere equipment without paying extortionate licensing fees.
They're wealthy because they were paid for not using their agricultural land, so they chopped down all the trees on parts of their land that they couldn't use, to classify it as agricultural, got paid, and as a side effect caused downstream flooding.
Just to stay on topic: outside the US there's a general rule of thumb: if Meta is against it, the EU is probably doing something right.
Well, the topic is really whether or not the EU's regulations are effective at producing desired outcomes. The comment you're responding to is making a strong argument that it isn't. I tend to agree.
There's a certain hubris to applying rules and regulations to a system that you fundamentally don't understand.
For those of us outside the US, it's not hard to understand how regulations work. The US acts as a protectionist country, it sets strict rules and pressures other governments to follow them. But at the same time, it promotes free markets, globalisation, and neoliberal values to everyone else.
The moment the EU shows even a small sign of protectionism, the US complains. It's a double standard.
So the solution is to allow the actual entrenched interests to determine the future of things when they also barely grasp how the current markets function and are currently proclaiming to be futurists?
The best way for "entrenched interests" to stifle competition is to buy/encourage regulation that keeps everybody else out of their sandbox pre-emptively.
For reference, see every highly-regulated industry everywhere.
You think Sam Altman was in testifying to the US Congress begging for AI regulation because he's just a super nice guy?
Regulation exists because of monopolistic practices and abuses in the early 20th century.
That's a bit oversimplified. Humans have been creating authority systems to control others' lives and business since formal societies have been a thing, likely even before agriculture. History is also full of examples of arbitrary and counterproductive attempts at control, which is a product of basic human nature combined with power, and why we must always be skeptical.
As a member of 'humanity', do you find yourself creating authority systems for AI though? No.
If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.
The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.
That may be; however, regulation has just changed monopolistic practices into even more profitable oligarchic practices. Just look at Standard Oil.
OpenAI was not an entrenched interest until 2023. Yahoo mattered until 2009. Nokia was the king of mobile phones until 2010.
Technology changes very quickly and the future of things is hardly decided by entrenched interests.
Won't somebody please think of the children?
Yes, a common rhetoric, and terrorism and national security.
They’re demanding collective conversation. You don’t have to be involved if you prefer to be asocial except to post impotent rage online.
Same way the pols aren’t futurists and perfect neither is anyone else. Everyone should sit at the table and discuss this like adults.
You want to go live in the hills alone, go for it, Dick Proenneke. Society is people working collectively.
> Which is very much what Europe's market looks like today. Stasis and shifting to a stagnating middle.
Preferable to a burgeoning oligarchy.
eu resident here. i’ve observed with sadness what a scared and terrified lot the europeans have become. but at least their young people can do drugs, party 72 hours straight, and graffiti all the walls in berlin so hey what’s not to like?
one day some historian will be able to pinpoint the exact moment europe chose to be anti-progress and fervently traditionalist, hell-bent on protecting pizza recipes, ruins of ancient civilizations, and a so-called single market. one day!
No, that... that's exactly what we have today. An oligarchy persists through captured state regulation. A more free market would have a constantly changing top.
Historically, freer markets have led to monopolies. It's why we have antitrust regulations in the first place (now if only they were enforced...)
Depends on the time horizon you look at. A completely unregulated market usually ends up dominated by monopolists… who last a generation or two and then are usurped and become declining oligarchs. True all the way back to the Medici.
In a rigidly regulated market with preemptive action by regulators (like EU, Japan) you end up with a persistent oligarchy that is never replaced. An aristocracy of sorts.
The middle road is the best. Set up a fair playing field and rules of the game, but allow innovation to happen unhindered, until the dust has settled. There should be regulation, but the rules must be bought with blood. The risk of premature regulation is worse.
> There should be regulation, but the rules must be bought with blood.
That's an awfully callous approach, and displays a disturbing lack of empathy toward other people.
Calculated, not callous. Quite the opposite: precaution kills people every day, just not as visibly. This is especially true in the area of medicine where innovation (new medicines) aren’t made available even when no other treatment is approved. People die every day by the hundreds of thousands of diseases that we could be innovating against.
You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.
Sometimes you can't reverse the damage and societal change after the market has already been created and shaped. Look at fossil fuels, plastic, social media, etc. We're now dependent on things that cause us harm, the damage done is irreversible and regulation is no longer possible because these innovations are now embedded in the foundations of modern society.
Innovation is good, but there's no need to go as fast as possible. We can be careful about things and study the effects more deeply before unleashing life changing technologies into the world. Now we're seeing the internet get destroyed by LLMs because a few people decided it was ok to do so. The benefits of this are not even clear yet, but we're still doing it just because we can. It's like driving a car at full speed into a corner just to see what's behind it.
I think it’s one of those “everyone knows” things that plastic and social media are bad, but I think the world without them is way, way worse. People focus on these popular narratives but if people thought social media was bad, they wouldn’t use it.
Personally, I don’t think they’re bad. Plastic isn’t that harmful, and neither is social media.
I think people romanticize the past and status quo. Change is scary, so when things change and the world is bad, it is easy to point at anything that changed and say “see, the change is what did it!”
People don't use things that they know are bad, but someone who has grown up in an environment where everyone uses social media for example, can't know that it's bad because they can't experience the alternative anymore. We don't know the effects all the accumulating plastic has on our bodies. The positive effects of these things can be bigger than the negative ones, but we can't know that because we're not even trying to figure it out. Sometimes it might be impossible to find out all the effects before large scale adoption, but still we should at least try. Currently the only study we do before deciding is the one to figure out if it'll make a profit for the owner.
> We don't know the effects all the accumulating plastic has on our bodies.
This is handwaving. We can be pretty well sure at this point what the effects aren’t, given their widespread prevalence for generations. We have a 2+ billion sample size.
No, we can't be sure. There's a lot of diseases that we don't know the cause of, for example. Cancers, dementia, Alzheimer's, etc. There is a possibility that the rates of those diseases are higher because of plastics. Plastic pollution also accumulates, there was a lot less plastic in the environment a few decades ago. We add more faster than it gets removed, and there could be some threshold after which it becomes more of an issue. We might see the effect a few decades from now. Not only on humans, but it's everywhere in the environment now, affecting all life on earth.
You're not arguing in a way that strikes me as intellectually honest.
You're hypothesizing the existence of large negative effects with minimal evidence.
But the positive effects of plastics and social media are extremely well understood and documented. Plastics have revolutionized practically every industry we have.
With that kind of pattern of evidence, I think it makes sense to discount the negatives and be sure to account for all the positives before saying that deploying the technology was a bad idea.
I agree that plastics probably do have more positives than negatives, but my point is that many of our innovations do have large negative effects, and if we take them into use before we understand those negative effects it can be impossible to fix the problems later. Now that we're starting to understand the extent of plastic pollution in our environment, if some future study reveals that it's a causal factor in some of our diseases it'll be too late to do anything about it. The plastic is in the environment and we can't get it out with regulation anymore.
Why take such risks when we could take our time doing more studies and thinking about all the possible scenarios? If we did, we might use plastics where they save lives and not use them in single-use containers and fabrics. We'd get most of the benefit without any of the harm.
> if people thought social media was bad, they wouldn’t use it.
Do you think Heroin is good?
I'm sure it's very good the first time you take it. If you don't consider all the effects before taking it, it does make sense. You feel very good, but the even stronger negative effects come after. Same can be said about a lot of technology.
Is the implication in your question that social media is addictive and should be banned or regulated on that basis?
While some people get addicted to it, the vast majority of users are not addicts. They choose to use it.
Addiction is a matter of degree. There's a bunch of polls where a large majority of people strongly agree that "they spend too much time on social media". Are they addicts? Are they "choosing to use it"? Are they saying it's too much because that's a trendy thing to say?
People who take Heroin think it is good in the situation they are taking it.
> Look at fossil fuels
WHAT?! Do you think we as humanity would have gotten to all the modern inventions we have today like the internet, space travel, atomic energy, if we had skipped the fossil fuel era by preemptively regulating it?
How do you imagine that? Unless you invent a time machine, go to the past, and give inventors schematics of modern tech achievable without fossil fuels.
Maybe not as fast as we did, but eventually we would have. Maybe more research would have been put into other forms of energy if the effects of fossil fuels had been considered more thoroughly and usage limited to a degree that didn't have a chance to cause such fast climate change. And so what if the rate of progress had been slower and we'd be 50 years behind current tech? At least we wouldn't have to worry about all the damage we've caused, and the costs associated with it. Due to that damage our future progress might halt, while a slower, more careful society would continue advancing far into the future.
I think it's an open question whether we can reboot society without the use of fossil fuels. I'm personally of the opinion that we wouldn't be able to.
Simply taking away some giant precursor for the advancements we enjoy today and then assuming it all would have worked out somehow is a bit naive.
I would need to see a very detailed pipeline from growing wheat in an agrarian society to the development of a microprocessor without fossil fuels to understand the point you're making. The mining, the transport, the manufacture, the packaging, the incredible number of supply chains, and the ability to give people time to spend on jobs like that rather than trying to grow their own food are all major barriers I see to the scenario you're suggesting.
The whole other aspect of this discussion that I think is not being explored is that technology is fundamentally competitive, and so it's very difficult to control the rate at which technology advances because we do not have a global government (and if we did have a global government, we'd have even more problems than we do now). As a comment I read yesterday said, technology concentrates gains towards those who can deploy it. And so there's going to be competition to deploy new technologies. Country-level regulation that tries to prevent this locally is only going to lead to other countries gaining the lead.
You might be right, but I wasn't saying we should ban all use of any technology that has any negative effects, but that we should at least try to understand all the effects before taking it into use, and try to avoid the worst outcomes by regulating how the tech is used. If it turns out that fossil fuels are the only way to achieve modern technology, then we should decide to take the risk of the negative effects knowing that there's such a risk. We shouldn't just blindly rush in any direction that might give us some benefit.
Regarding competition, yes you're right. Effective regulation is impossible before we learn global co-operation, and that's probably never going to happen.
Very naive take that's not based in reality but would only work in fiction.
Historically, all nations that developed and deployed new tech, new sources of energy and new weapons, have gained economic and military superiority over nations who did not, which ended up being conquered/enslaved.
The UK would not have managed to be the world power before the US without its coal-fueled industrial era.
So as history goes, if you refuse to take part in, or cannot keep up with, the international tech, energy, and weapons race, you'll be subjugated by those who win it. That's why the US lifted all brakes on AI: to make sure it wins and not China. What the EU is doing, self-regulating itself to death, is ensuring its future will be at the mercy of the US and China. I'm not the one saying this; history proves it.
You're right, in a system based on competition it's not possible to prevent these technologies from being used as soon as they're invented if there's some advantage to be gained. We need to figure out global co-operation before such a thing is realistic.
But if such co-operation was possible, it would make sense to progress more carefully.
There is no such thing as "global cooperation" in our reality for things beyond platitudes. That's only a fantasy for sci-fi novels. Every tribe wants to rule the others, because if you don't, the other tribes will rule you.
It's been the case since our caveman days. That's why tribes that don't focus on conquest end up removed from the gene pool. Now extend tribe to nation to make it relevant to the current day.
The internet was created by the military at the start of the fossil era; there is no reason why it should be affected by the oil era. If we didn't travel as much, because we didn't use cars and planes as much, the internet would be even more important.
Space travel does need a lot of oil, so it might be affected, but its beginnings were in the 40s, so the research idea was already there.
Atomic energy is also from the 40s and might have been the alternative to oil, so it would have thrived more if we hadn't used oil as much.
Also all 3 ARE heavily regulated and mostly done by nation states.
How would you have won the world wars without oil?
Your argument only works in a fictional world where oil does not exist and you have the hindsight of today.
But when oil does exist and you had chosen not to use it, you would long since have been steamrolled by industrialized powers who used their superior oil-fueled economy and military to destroy or enslave your nation, and you wouldn't be writing this today.
I thought we were arguing about regulating oil, not about not using oil at all.
> How would you have won the world wars without oil?
You don't need to win world wars to have technological advancement; in fact, my country didn't. I think the problem with this discussion is that we all disagree about what to regulate; that's how we ended up with the current situation, after all.
I interpreted it to mean that we wouldn't use plastic for everything. I think we would be fine having glass bottles and paper, carton, or wood for grocery wrapping. It wouldn't be so individualized per company, but that's not important for the economy and consumers, and it would also result in a more competitive market.
I also interpreted it to mean that we wouldn't have so many cars and wouldn't use planes except for really important things (i.e. international politics). Cities simply expand to the travel speed of the primary means of transportation. We would simply have more walkable cities and would use more trains. Amazon probably wouldn't be possible and we would have more local producers. In fact this is what we currently aim for, and it is hard, because transitioning means we have larger cities than we can support with the primary means of transportation.
As for your example inventions: we did have computers in the 40s and the need for networking would arise. Space travel is in danger, but you can use oil for space travel without using it for everyday consumer products. As I already wrote, we would have more atomic energy, not sure if that would be good though.
Depends what those assumptions are. If the goal is protecting humans from AI gross negligence, then the assumptions are predetermined to side with ordinary humans (just one example). Let's hope logic and an understanding of the long-term situation precede the arguments in the rulesets.
You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and destruction of society, which sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given the nature of policy being extremely slow to change (see: copyright).
I'd urge you to read a book like Black Swan, or study up on statistics.
Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.
(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.
I'd like for you to expand your point on understanding statistics better. I think I have a very good understanding of statistics, but I don't see how it relates to your point.
Your point is fundamentally philosophical, which is you can't use the past to predict the future. But that's actually a fairly reductive point in this context.
GP's point is that simply making an argument about why everything will fail is not sufficient to have it be true. So we need to see something significantly more compelling than a bunch of arguments about why it's going to be really bad to really believe it, since we always get arguments about why things are really, really bad.
> which is you can't use the past to predict the future
Of course you can use the past to predict (well, estimate) the future. How fast does wheat grow? Collect a hundred years of statistics of wheat growth and weather patterns, and you can estimate how fast it will grow this year with a high level of accuracy, unless a "black swan" event occurs which wasn't in the past data.
Note carefully what we're doing here: we're applying probability on statistical data of wheat growth from the past to estimate wheat growth in the future.
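To make the point concrete, here's a minimal sketch of that kind of estimation; the growth figures are made up for illustration:

```python
# Estimating this year's wheat growth from past observations.
# The numbers below are hypothetical, purely to illustrate the idea.
from statistics import mean, stdev

past_growth = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 2.2]  # tonnes/hectare per season

estimate = mean(past_growth)
spread = stdev(past_growth)

# The estimate is only as good as the assumption that the future resembles
# the past: a "black swan" season would fall far outside this interval.
low, high = estimate - 2 * spread, estimate + 2 * spread
print(f"estimate: {estimate:.2f}, rough interval: [{low:.2f}, {high:.2f}]")
```

The interval is meaningful only because every data point is an observation of the same process being predicted; that's exactly what's missing in the AI case.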
There's no past data about the effects of AI on society, so there's no way to make statements about whether it will be safe in the future. However, people use the statistic that other, completely unrelated, things in the past didn't cause "doom" (societal collapse) to predict that AI won't cause doom. But statistics and probability don't work this way: using historical data about one thing to predict the future of another thing is a fallacy. Even if in our minds they are related (doom/societal collapse caused by a new technology), mathematically they are not.
> we always get arguments about why things are really, really bad.
When we're dealing with a completely new, powerful thing that we have no past data on, we absolutely should consider the worst, and of course, the median, and best case scenarios, and we should prepare for all of these. It's nonsensical to shout down the people preparing for the worst and working to make sure it doesn't happen, or to label them as doomers, just because society has survived other unrelated bad things in the past.
Ah, I see your point is not philosophical. It's that we don't have historical data about the effect of AI. I understand your point now. I tend to be quite a bit more liberal and allow things to play out because I think many systems are too complex to predict. But I don't think that's a point that we'll settle here.
The experience with other industries like cars (especially EVs) shows that the ability of EU regulators to shape global and home markets is a lot more limited than they like to think.
Not really. China made a big policy bet a decade early and won that battle: the whole government was put behind buying this new tech before everyone else, forcing buses to be electric if cities wanted the federal-level thumbs up, or the license-plate lottery system, for example.
So I disagree; Europe would probably be even further behind in EVs if it hadn't pushed EU manufacturers to invest so heavily in the industry.
You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean; and in Europe Volkswagen has already overtaken Tesla in Q1 sales, with Audi not far behind.
What will happen, like every time a market is regulated in the EU, is that the market will move on without the EU.
The point is to stop and deter market failure, not anticipate hypothetical market failure
That has never worked.
If the regulators were qualified to work in the industry, then guess what: they'd be working in the industry.
We know what the market will look like. Quasi monopoly and basic user rights violated.
Regulating it while the cat is out of the bag leads to monopolistic conglomerates like Meta and Google. Meta shouldn't have been allowed to usurp Instagram and WhatsApp, and Google shouldn't have been allowed to bring YouTube into the fold. Now it's too late to regulate a way out of this.
It’s easy to say this in hindsight, though this is the first time I think I’ve seen someone say that about YouTube even though I’ve seen it about Instagram and WhatsApp a lot.
The YouTube deal was a lot earlier than Instagram, 2006. Google was way smaller than now. iPhone wasn’t announced. And it wasn’t two social networks merging.
Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.
>Now it's too late to regulate a way out of this.
Technically untrue, monopoly busting is a kind of regulation. I wouldn't bet on it happening on any meaningful scale, given how strongly IT benefits from economies of scale, but we could be surprised.
> before we have any idea what the market is going to look like in a couple years.
Oh, we already know large chunks of it, and the regulations explicitly address that.
If the chest-beating crowd were presented with these regulations piecemeal, without ever mentioning the EU, they'd probably be in overwhelming support of each part.
But since they don't care to read anything and have an instinctive aversion to all things regulatory and most things EU, we get the boos and the jeers
I literally lived this with GDPR. In the beginning everyone ran around pretending to understand what it meant. There were a ton of consultants and lawyers who basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.
In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.
Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in mandarin.
The main thing is the EU basically didn’t enforce it. I was really excited for data portability but it hasn’t really come to pass
> In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years.
It's the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy.
Exactly. No anonymity, no thought crime, lots of filters to screen out bad misinformation, etc. Regulate it.
They don't want a market. They want total control, as usual for control freaks.
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, it's OK?
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributibg it. If it were just regular software that had that capability, would it be allowed?
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing whether it can predict a number of tokens that follow. It’s a big stretch to say that they’re reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?
Then they should easily fall within the regulation section posted earlier.
If you cannot see the difference between BitTorrent and AI models, then it's probably not worth engaging with you.
But AI models have been shown to reproduce their training data:
https://gizmodo.com/ai-art-generators-ai-copyright-stable-di...
https://arxiv.org/abs/2301.13188
> LLMs are hardly reliable ways to reproduce copyrighted works
Only because the companies are intentionally making it so. If they weren't trained to not reproduce copyrighted works they would be able to.
They're probably training them to refuse, but fundamentally the models are obviously too small to usually memorise content, and can only do it when there's many copies in the training set. Quotation is a waste of parameters better used for generalisation.
The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.
The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.
it's like these people never tried asking for song lyrics
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
You can also ask people to repeat a text and some will fail. My point is that even if some LLMs (probably only older ones) fail, that doesn't mean future ones will (in the majority of cases). Especially since benchmarks indicate they are becoming smarter over time.
It is entirely unreasonable to prevent a general-purpose model from being distributed for the largely frivolous reason that some copyrighted works could maybe be approximated using it. We don't make metallurgy illegal because it's possible to make guns with metal.
When a model that has this capability is being distributed, copyright infringement is not happening. It is happening when a person _uses_ the model to reproduce a copyrighted work without the appropriate license. This is not meaningfully different to the distinction between my ISP selling me internet access and me using said internet access to download copyrighted material. If the copyright holders want to pursue people who are actually doing copyright infringement, they should have to sue the people who are actually doing copyright infringement and they shouldn't have broad power to shut down anything and everything that could be construed as maybe being capable of helping copyright infringement.
Copyright protections aren't valuable enough to society to destroy everything else in society just to make enforcing copyright easier. In fact, considering how it is actually enforced today, it's not hard to argue that the impact of copyright on modern society is a net negative.
I have a Xerox machine that can reliably reproduce copyrighted works. Is that a problem, too?
Blaming tools for the actions of their users is stupid.
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
Xerox already went through that lawsuit and won, which is why photocopiers still exist. The tool isn't in the wrong for being told to print out the copyrighted works. The user still had to make the conscious decision to copy that particular work. Hence, still the user's fault.
You take the copyrighted work to the printer; you don't upload data to an LLM first, it is already in the machine. If you got LLMs without training data (however that would work) and the user needed to provide the data, then it would be OK.
You don't "upload" data to an LLM, but that's already been explained multiple times, and evidently it didn't soak in.
LLMs extract semantic information from their training data and store it at extremely low precision in latent space. To the extent original works can be recovered from them, those works were nothing intrinsically special to begin with. At best such works simply milk our existing culture by recapitulating ancient archetypes, a la Harry Potter or Star Wars.
If the copyright cartels choose to fight AI, the copyright cartels will and must lose. This isn't Napster Part 2: Electric Boogaloo. There is too much at stake this time.
LLMs do not have all copyrighted works in them.
In some cases they can be prompted to guess a number of tokens that follow an excerpt from another work.
They do not contain all copyrighted works, though. That’s an incorrect understanding.
One of the reasons the New York Times didn't supply the prompts in their lawsuit is because it takes an enormous amount of effort to get LLMs to produce copyrighted works. In particular, you have to actually hand LLMs copyrighted works in the prompt to get them to continue it.
It's not like users are accidentally producing copies of Harry Potter.
Are there any LLMs available with a, "give me copyrighted material" button? I don't think that is how they work.
Commercial use of someone's image also already has laws concerning that as far as I know, don't they?
You'd think wrong.
Helpfully the law already disagrees. That Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. Xerox machine (and every other printer sold today) literally leaves a paper trail to trace it back to them.
https://en.wikipedia.org/wiki/Printer_tracking_dots
i believe only color printers are known to have this functionality, and it’s typically used for detecting counterfeit, not for enforcing copyright
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
> Still, it's a decent example of blaming the tool for the actions of its users.
They're not really "blaming" the tool though. They're using a supply chain attack against the subset of users they're interested in.
If I've copied someone else's copyrighted work on my Xerox machine, then give it to you, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I give it to you, that's another story. The issue here isn't the ability of an LLM to produce it when I provide it with the copyrighted work as an input, it's whether or not there's an input baked-in at the time of distribution that gives it the ability to continue producing it even if the person who receives it doesn't have access to the work to provide it in the first place.
To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense, though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.
You can train an LLM on completely clean data, creative commons and legally licensed text, and at inference time someone will just put a whole article or chapter into the model and have full access to regenerate it however they like.
Re-quoting the section the parent comment included from this agreement:
> > GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
It sounds to me like an LLM you describe would be covered if the people distributing it put a clause in the license saying that people can't do that.
According to the law in some jurisdictions it is. (notably most EU Member States, and several others worldwide).
In those places, fees are actually included in the price of the appliance and the needed supplies ("reprographic levy"), or public operators may need to pay additionally based on usage. That money goes towards funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)
And that's a stupid, corrupt law. Trying to apply it to AI will not work out quite as well as it did with photocopiers.
It's a trojan horse, they try to do the same thing that is happening in the banking sector.
By this they want AI model providers to have a strong grip on their users, controlling their usage so as not to risk issues with the regulator. Then the European technocrats will be able to control the whole field by controlling the top providers, who in turn will overreach by controlling their users.
> One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way
AFAICT the actual text of the act[0] does not mention anything like that. The closest thing to what you describe is part of the chapter on copyright in the Code of Practice[1]; however, the code does not add any new requirements to the act (it is not even part of the act itself). What it does is present one way (which does not mean it is the only one) to comply with the act's requirements. As a relevant example, the act requires respecting machine-readable opt-out mechanisms when training but doesn't specify which ones, while the code of practice explicitly mentions respecting robots.txt during web scraping.
The part about copyright outputs in the code is actually (measure 1.4):
> (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:
> a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and
> b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.
> (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.
Keep in mind that "Signatories" here is whoever signed the Code of Practice: obviously if i make my own AI model and do not sign that code of practice myself (but i still follow the act requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.
As for open source models, the "(1b)" above is quite clear (for open source models that want to use this code of practice - which they do not have to!) that all they have to do is to mention in their documentation that their users should not generate copyright infringing content with them.
In fact the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI Office (or whatever it is called; I don't remember) does not recognize Meta's AI as open source, so they don't get to benefit from those exceptions, though I'm not sure about the details here.
[0] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:...
[1] https://ec.europa.eu/newsroom/dae/redirection/document/11811...
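As an aside, the robots.txt opt-out mentioned above is trivially checkable by a crawler. A minimal sketch with Python's standard library (the bot name and URL are made-up placeholders, not anything from the act or the code):

```python
# Checking a machine-readable opt-out (robots.txt) before scraping a page.
# "ExampleTrainingBot" and the URL are hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: ExampleTrainingBot",
    "Disallow: /",  # this site opts out of our crawler entirely
])

allowed = rp.can_fetch("ExampleTrainingBot", "https://example.com/article")
print(allowed)  # a compliant crawler would skip this page
```

A crawler that ignores this check is exactly what the opt-out mechanism in the code of practice is meant to rule out.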
There’s a summary of the guidelines here for anyone who is wondering:
https://artificialintelligenceact.eu/introduction-to-code-of...
It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.
These regulations may end up creating a trap for European companies.
Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.
Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.
On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.
Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.
Also, EU users will try to use the better AI products, e.g. with a VPN to the US.
Most won't. Remember that this is an issue almost no one (outside a certain bubble) is aware of.
Well, if there's not much difference why bother. If there are copyright restrictions on things people care about Europeans are perfectly capable of bypassing restrictions, like watching the ending of Game of Thrones etc.
Haha, huge, HUGE L-take. Go to any library or coffeeshop, and you'll see most students on their laptops are on ChatGPT. Do you think they won't immediately figure out how to use a VPN to move to the "better" models from the US or China if the EU regulations cripple the ones available in the EU?
The EU's preemptive war on AI will be like the RIAA's war on music piracy. EU consumers will get their digital stuff one way or another; only the EU's domestic products will fall behind by not competing to create an equally good product that consumers want.
Can you please make your substantive points without snark or name-calling?
https://news.ycombinator.com/newsguidelines.html
> Do you think they won't immediately figure out how to use a VPN to move to the "better" models
I think they don't even know the term "model" (in AI context), let alone which one's the best. They only know ChatGPT.
I do think it's possible that stories spread like "the new cool ChatGPT update is US-only: Here's how to access it in the EU".
However I don't think many will make use of that.
Anecdotally, most people around me (even CS colleagues) only use the standard model, ChatGPT 4o, and don't even take a look at the other options.
Additionally, AI companies could quickly get in trouble if they accept payments from EU credit cards.
>I think they don't even know the term "model" (in AI context), let alone which one's the best. They only know ChatGPT.
They don't know how torrents work either, but they always find a way to pirate movies to avoid Netflix's shitty policies. Necessity is the mother of invention.
>However I don't think many will make use of that.
You underestimate the drive kids/young adults have trying to maximize their grades/output while doing the bare minimum to have more time for themselves.
>Additionally, AI companies could quickly get in trouble if they accept payments from EU credit cards.
Well, if the EU keeps this up, that might not be an issue in the long term: without top-of-the-line AI, choked by regulations, and with the costs of caring for an ageing demographic sucking up all the economic output, the EU economy will fall further and further into irrelevancy.
And then they'll get fined a few billion anyway to cover the gap for no European tech to tax.
As a European, this sounds like an excellent solution.
US megatech funding our public infrastructure? Amazing. Especially after the US attacked us with tariffs.
Just like Russian mega-energy powering your grid?
Bad idea.
Europe is digging itself a hole with a combination of suffocating regulation and dependence on foreign players. It's so dumb, but Europeans are so used to it they can't see the problem.
It's always the same argument, and it is true. The US retained an edge over the rest of the world through deregulating tech.
My issue with this is that it doesn't look like America's laissez-faire stance on these issues helped Americans much. Internet companies have gotten absolutely humongous and given rise to a new class of techno-oligarchs who are now funding anti-democracy campaigns.
I feel like getting slightly less performant models is a fair price to pay for increased scrutiny over these powerful private actors.
The problem is that misaligned AI will eventually affect everyone worldwide. Even if us Americans cause the problem, it won't stay an American problem.
If Europe wants leverage, the best plan is to tell ASML to turn off the supply of chips.
> It’s certainly onerous.
What exactly is onerous about it?
It's basically micromanaging an industry that European countries have not been able to cultivate themselves. It's legislation for legislation's sake. If you had a naive hope that Mario Draghi's gloomy report on the EU's competitiveness would pave the way for a political breakthrough in the EU - one is tempted to say something along the lines of communist China's market reforms in the 70s - then you have to conclude that the EU is continuing in exactly the same direction. I have actually lost faith in the EU.
This all seems fine.
Most of these items should be implemented by major providers…
The problem is this severely harms the ability to release opens weights models, and only leaves the average person with options that aren't good for privacy.
I don't care about your overly verbose, blandly written slop. If I wanted a llm summary, I would ask an llm myself.
This really is the 2025 equivalent to posting links to a google result page, imo.
More verbose than the source text? And who cares about bland writing when you're summarizing a legal text?
It is... helpful though. More so than your reply
Touché, I'll grant you that.
Nope. This text is embedded in HN and will survive rather better than the prompt or the search result, both of which are non-reproducible. It may bear no relation to reality but at least it won't abruptly disappear.
Unless, ya know, it gets marked as Flagged/Dead.
Huh. Well, yes, it has indeed now been deleted. I guess that's the end of that line of thinking then.
I don't think my original post deserved to be flagged, but I suspect it was automatic enforcement.
EU regulations are sometimes able to bully the world into compliance (eg. cookies).
Usually minorities are able to impose "wins" on a majority when the price of compliance is lower than the price of defiance.
This is not the case with AI. The stakes are enormous. AI is full steam ahead and no one is getting in the way short of nuclear war.
But AI also carries tremendous risks, from something as simple as automating warfare to something like an evil AGI.
In Germany we still carry trauma from the automatic machine guns set up along the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.
The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!
I think your machine gun example illustrates that people are quite capable of massacring each other without AI or even high tech; in past periods, sometimes over 30% of males died in warfare. While AI could get involved, it's kind of a separate thing.
Yeah, the automated gun phobia argument is dumb. Should we ban all future tech development because some people are scared of things that can be dangerous but useful? No.
Plus, ironically, Germany's Rheinmetall is a leader in automated anti-air guns, so the public's phobia of automated guns is pointless; at least in this case common sense won, though in many others, like nuclear energy, it lost.
It seems Germans are easy to manipulate into going against their best interests if you manage to trigger certain phobias via propaganda: "Look out, it's the nuclear boogieman, now switch your economy to Russian gas instead, it's safer."
Switching to Russian gas looks bad now, but it was rational back then. The idea was to give Russia leverage over Europe other than war, so that it wouldn't need war.
>but was rational back then.
Only if you're a corrupt German politician getting bribed by Russia to sell out long term national security for short term corporate profits.
It was also considered a stupid idea back then by NATO powers asking Germany WTF are you doing, tying your economy to the nation we're preparing to go to war with.
> The idea was to give russia leverage on europe besides war, so that they don't need war.
The present day proves it was a stupid idea.
"You were given the choice between war and dishonor. You chose dishonor, and you will have war." - Churchill
It worked quite well between France and Germany 50 years earlier.
Yes, it was naive given the philosophy of the leaders of the USSR/Russia, but I don't think it was that problematic. We did need some years to adapt, but it doesn't meaningfully impact the ability to send weapons to Ukraine and impose sanctions in the long term. Meanwhile, we got cheap gas for several decades and Russia got trade partners besides China. Would we be better off if we had never used the gas in the first place? Then Russia would have bound itself to China and North Korea even earlier. It also had less environmental impact than shipping gas from the US.
>It worked quite well between France and Germany 50 years earlier.
France and Germany were democracies under the umbrella of the US, which acted as arbiter. It's disingenuous, even stupid, to argue that an economic relationship with the USSR and Putin's Russia is the same thing.
Yes, I agree it was naive. It is something people come up with if they think everyone cares about their own population's best interests and "western" values. Yet that is an assumption we used to base a lot on, and still do.
Did the US force France into it? I thought it was an idea of the French government (Charles de Gaulle), while the population harbored much resentment, which only vanished after successful business together. Germany didn't have much choice, though. I don't think it would have had a lasting impact if it had been decreed rather than coming from the local population.
You could hope that making Russia richer would lead it to prefer being rich over being large, which is basically the deal we have with China, which is still an alien dictatorship.
Here’s a nice history of the decades old relationship: https://www.dw.com/en/russian-gas-in-germany-a-complicated-5...
It was a major success, contributing to the thawing of relations with the Soviet Union and probably to its peaceful end. It supported several EU countries through their economic development and kept the EU afloat through the financial crisis.
It was a very important source of energy, and there is no replacement. This can be seen in the flight of capital, the deindustrialisation, and the poor economic prospects in Germany and the EU.
But as far as I know, many countries still import energy from Russia, either directly or laundered through third parties.
I think the argument was about automated killing, not automated weapons.
There are already drones from Germany capable of automatic target acquisition, but they still require a human in the loop to pull the trigger. Not because they technically couldn't, but because they are required to.
I don't disagree that we need regulation, but I also think citing literal fiction isn't a good argument. We're also very, very far away from anything approaching AGI, so the idea of it becoming evil seems a bit far fetched.
I agree fiction is a bad argument.
On the other hand, firstly, everyone disagrees about what the phrase AGI means, with definitions varying from "we've had it for years already" to "the ability to do provably impossible things like solve the halting problem"; and secondly, we have a very bad track record for predicting how long anything in AI will take to invent, with failures in both directions: constantly thinking self-driving cars are just around the corner, versus declaring that an AI that could play Go well was "decades" away mere months before it beat the world champion.
Autonomous sentry turrets have already been a thing since the 2000s. If we assume that military technology is always at least some 5-10 years ahead of civilian, it is likely that some if not all of the "defense" contractors have far more terrifying autonomous weapons.
Did you catch the news about Grok wanting to kill the Jews last week? All you need for AI or AGI to be evil is a prompt saying "be evil."
We don't need AGI in order for AI to destroy humanity.
Regulation does not stop weapons that utilize AI from being created. It only slows down honest states that try to abide by it, and gives the dishonest ones a head start.
Guess what happens to the race then?
you can choose to live in fear, the rest of us are embracing growth
The only thing the cookie law has accomplished for users is pestering everyone with endless popups (full of dark patterns). The web is pretty much unbearable to use without uBlock filtering that nonsense away. User tracking and fingerprinting have moved server side. Zero user privacy has been gained, because there's too much money to be made, and the industry routed around this brain-dead legislation.
> User tracking and fingerprinting has moved server side.
This smells like a misconception of the GDPR. The GDPR is not about cookies, it is about tracking. You are not allowed to track your users without consent, even if you do not use any cookies.
Login is tracking too, even when it is purely functional rather than for tracking. Laws are analyzed by lawyers, and lawyers err on the side of caution, so you end up with these notices.
Cookies and cross-domain tracking are slightly different from a login. A login occurs on one platform and does not track you when you go on to Amazon, some porn site, or Infowars. But cross-domain cookies need no auth and are everywhere, because webmasters get paid for adding them, and they track you everywhere.
Well, in my case I simply no longer use websites with an enormous number of "partners". Cookie legislation was great because it shows you how many businesses are ad-based; it added a lot of transparency. It is annoying only because you want the stuff for free, and that usually carries a lot of cookies. Businesses that do not track beyond the necessary do not have that issue with cookie banners, IMO. The GDPR is great for users and not too difficult to implement. The part where you can ask a company what data it holds about you is also awesome.
I'm surprised that most of the comments here are siding with Europe blindly?
Am I the only one who assumes by default that European regulation will be heavy-handed and ill conceived?
Well, Europe hasn't enacted policies actually breaking American monopolies until now.
Europeans still essentially rely on Google, Meta and Amazon for most of their browsing experience. So I'm assuming Europe's goal is not to compete with or break the American moat, but to force these companies to be polite and to preserve national sovereignty over important national-security aspects.
A position which is essentially reasonable, if perhaps too polite.
> So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
When push comes to shove, US companies will always prioritize US interests. If you want to stay under the US umbrella, by all means. But honestly, it looks very short-sighted to me.
After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?
You have only one option: grow alternatives. Fund your own companies. China managed to fund its local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't, they will forever stay under the influence of another big brother. It is the US today, but it could be China tomorrow.
The EU sucks at venture capital.
What is bad about heavy handed regulation to protect citizens?
That it is very likely not going to work as advertised, and might even backfire.
The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
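For a sense of where that 10^25 threshold sits, a widely cited back-of-the-envelope rule estimates dense-transformer training compute as roughly 6 × parameters × training tokens. A minimal sketch (the model sizes and token counts below are illustrative assumptions for comparison, not figures from the regulation or from any provider):

```python
# Rough rule of thumb: total training compute for a dense transformer is
# approximately 6 * N * D FLOPs, where N = parameter count and D = training tokens.
# The EU AI Act uses 10^25 FLOPs as the trigger for "systemic risk" obligations.

EU_SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical training runs, chosen only to bracket the threshold.
examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),
    "70B params, 15T tokens": training_flops(70e9, 15e12),
    "400B params, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in examples.items():
    status = "above threshold" if flops >= EU_SYSTEMIC_RISK_FLOPS else "below threshold"
    print(f"{name}: {flops:.2e} FLOPs -> {status}")
```

Under this approximation, only the largest of the three hypothetical runs crosses the line, which matches the claim that very few labs currently train at that scale.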
The sad reality is that nobody ever cares about the security or ethics of their product unless they are pressured. Model evaluation against a well-defined ethics framework, or something like HarmBench, is not without cost; nobody wants to do it. It is similar to pentesting. It is good that such requirements are being pushed forward to make model owners responsible here. It also protects authors and reduces the risk of their works being copied verbatim; I think this is what model owners are afraid of the most.
TBH I would most prefer that models weren't forbidden to answer certain questions.
This is the same entity that has literally ruled that you can be charged with blasphemy for insulting religious figures, so intent to protect citizens is not a motive I ascribe to them.
>charged with blasphemy for insulting religious figures
Isn't that the exact definition of "blasphemy"[1]?
[1] https://www.merriam-webster.com/dictionary/blasphemy
Indeed it is. And when you hear that someone has been charged with blasphemy do you think of a modern society or a medieval hellscape?
But it IS protecting citizens from blasphemy.
What entity specifically?
The EU Court of Human Rights upheld a blasphemy conviction for calling Muhammad (who married a 9 year old) a pedophile https://en.wikipedia.org/wiki/E.S._v._Austria_(2018)
Dang that's a crazy outcome.
>Even in a lively discussion it was not compatible with Article 10 of the Convention to pack incriminating statements into the wrapping of an otherwise acceptable expression of opinion and claim that this rendered passable those statements exceeding the permissible limits of freedom of expression.
Although the expression of this opinion is otherwise acceptable, it was packed with "incriminating statements". But the subject of those incriminating statements is a religious figure from some 1,400 years ago.
I don't think good intentions alone are enough to do good.
The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". But it's for our own good you see? The EU knows best and it wants your data without limits and without probable cause. Everyone is a suspect.
1984 wasn't supposed to be a blueprint.
A good example of how this can end up with negative outcomes is the cookie directive, which is how we ended up with cookie-consent popovers on every website that do absolutely nothing to prevent tracking and have only made life more frustrating in the EU and abroad.
It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies, which still, largely unhindered, track and profile just the same.
Cookie-consent popovers were the deliberate decision of companies to comply in the worst possible way. A much simpler option would have been to stop tracking users, especially where it is not their primary business.
Newer regulations also mandate that "reject all cookies" be a one-click action, but surprisingly, compliance is low. Once again, the enemy of the customer here is the company, not the EU regulation.
I don't believe that every website has colluded to give itself a horrible user experience in some kind of mass protest against the GDPR. My guess is that companies are acting in their own interests, which is exactly what I expect them to do; and if the EU is not capable of figuring out what that will look like, then that is a valid criticism of its ability to make regulations.
What makes you think the regulators didn't predict the outcome?
Of course the business which depend on harvesting data will do anything they can to continue harvesting data. The regulation just makes that require consent. This is good.
If businesses are intent to keep on harvesting data by using dark patterns to obtain "consent", these businesses should either die or change. This is good.
Websites use ready-to-use cookie banners provided by their advertisers, who have every incentive to make the process as painful as possible unless you click "accept", essentially following the model that Facebook pioneered.
And since most people click on accept, websites don't really care either.
Yet that user interface is against the law and enforcing the GDPR would improve it.
A perfect example of regulation shaping a market, and succeeding only at producing ill results.
So because sometimes a regulation misses the mark, governments should not try to regulate?
Well, pragmatically, I'd say no. We must judge regulations not by the well-wishes and intentions behind them but by the actual outcomes they have. These regulations affect people, jobs, and lives.
The odds of the EU actually hitting a useful mark with this type of regulation, given its technical illiteracy, are just astronomically low.
I think OP is criticising the blind trust that the regulation hits the mark just because Meta is mad about it. Zuckerberg can be a bastard and still correctly call out a burdensome law.
Bad argument. The solution is not to stop regulating; it's to make a new law mandating that companies put cookies behind an opt-in menu that can't be a banner. And if this somehow backfires too, we go again. Giving up is not the solution to the privacy crisis.
[dead]
You end up with an anemic industry and heavy dependence on foreign players.
He also said “ill conceived”
Will they resort to turning off the Internet to protect citizens?
Is this AI agreement about "turning off the Internet"?
Or maybe just exclude Meta from the EU? :)
It does not protect citizens. The EU shoves a lot down the member states' throats.
What's bad about it is when people say "it's to protect citizens" when it's really a political move to control American companies.
Because it doesn't protect us.
It just creates barriers for internal players while giving a massive head start to evil outside players.
"Even the very wise cannot see all ends." And these people aren't what I'd call "very wise."
Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.
Everyone working on AI will care, if ASML stops servicing TSMC's machines. If Europe is serious about responsible AI, I think applying pressure to ASML might be their only real option.
> If Europe is serious about responsible AI, I think applying pressure to ASML might be their only real option.
True, but now they get to butt heads with the US, who call the tunes at ASML even though ASML is a European company.
We (the US) have given China every possible incentive to break that dependency short of dropping bombs on them, and it would be foolish to think the TSMC/ASML status quo will still hold in 5-10 years. Say what you will about China, they aren't a nation of morons. Now that it's clear what's at stake, I think they will respond rationally and effectively.
"blindly"? Only if you assume you are right in your opinion can you arrive at the conclusion that your detractors didn't learn about it.
Since you then admit to "assume by default", are you sure you are not what you complain about?
I was specifically referring to several comments that specifically stated that they did not know what the regulation was, but that they assumed Europe was right and Meta was wrong.
I, prior to reading the details of the regulation myself, was commenting on my surprise at the default inclinations of people.
At no point did I pass judgement on the regulation and even after reading a little bit on it I need to read more to actually decide whether I think it's good or bad.
Being American it impacts me less, so it's lower on my to do list.
Let's see, how many times did I get robo-called in the last decade? Zero :)
Sometimes the regulations are heavy-handed and ill-conceived. Most of the time they are influenced by one lobby or another. For example, car emissions limits scale with _weight_, of all things, which completely defeats the point and actually makes today's car market worse for the environment than it used to be, _because of_ emissions regulations. However, it is undeniable that the average European is better off in terms of privacy.
It's just foreign interests trying to keep Europe down
I feel like Europe does a plenty good job of that itself
> Am I the only one who assumes by default
And that's the problem: assuming by default.
How about not assuming by default? How about reading something about this? How about forming your own opinion, and not the opinion of the trillion- dollar supranational corporations?
Are you saying that upon reading a sentence like
"Meta disagrees with European regulation"
That you don't have an immediate guess at which party you are most likely to agree with?
I do and I think most people do.
I'm not about to go around spreading my uninformed opinion though. What my comment said was that I was surprised at people's kneejerk reaction that Europe must be right, especially on HN. Perhaps I should have also chided those people for commenting at all, but that's hindsight for you.
The "kneejerk reaction" is precisely because it's Meta. You know, the poor innocent trillion-dollar supranational corporation whose latest foray into AI was "opt-in all of our users into training our AI and make the opt-out an awkward broken multi-step process designed to discourage the opt-out".
Whereas EU's "heavy-handed and ill-conceived" regulations are "respect copyright, respect user choice, document your models, and use AI responsibly".
Maybe the others have put in a little more effort to understand the regulation before blindly criticising it? Similar to the GDPR, a lot of it is just common sense—if you don’t think that "the market" as represented by global mega-corps will just sort it out, that is.
Our friends in the EU have a long history of well-intentioned but misguided policy and regulations, which has led to stunted growth in their tech sector.
Maybe some think that is a good thing, and perhaps it may be, but I feel it's more likely that any AI regulation at this point in time is premature, doomed to failure and unintended consequences.
Yet at the same time, they also have a long history of very successful policy, such as the USB-C issue, but also the GDPR, which has raised the issue of our right to privacy all over the world.
How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices from customers. Next up is insurance companies. How long do you want to watch? Until all accountability is gone for good?
Hard disagree on both GDPR and USBC.
If I had to pick a connector that the world was forced to use forever due to some European technocrat, I would not have picked usb-c.
Hell, the ports on my MacBook are nearly shot just a few years in.
Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.
The USB-C charging ports on my phones have always collected lint to the point they totally stop working and have to be cleaned out vigorously.
I don't know how this problem is so much worse with USB-C or the physics behind it, but it's a very common issue.
This port could be improved for sure.
As someone with both a usb-c and micro-usb phone, I can assure you that other connectors are not free of that problem. The micro-usb one definitely feels worse. Not sure about the old proprietary crap that used to be forced down our throats so we buy Apple AND Nokia chargers, and a new one for each model, too.
> Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.
Monetary value, certainly, but that’s considering money as the only desirable value to measure against.
Who said money? Time and human effort are the most valuable commodities.
The time and effort wasted on consultants and lawyers could have been spent on more important problems, or used to solve the current one more efficiently.
I mean, getting USB-C to be usable on everything is like a nice-to-have, I wouldn't call it "very successful policy".
It’s just an example. The EU has often, and often successfully, pushed for standardisation to the benefit of end users.
Which... has the consequence of stifling innovation. Regulation and policy are a two-way street.
Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few year's time?
The EU regulation, as-is, simply will not allow a new, technically superior connector to enter the market. Fast-forward a decade to when USB-C is dead, and the EU will keep it limping along, stifling more innovation along the way.
Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology/internet. Normally standardization is achieved via two (or more) competing standards where one eventually "wins" via adoption.
Well intentioned, but with negative side-effects.
If the industry comes out with a new, better connector, they can use it, as long as they also provide USB-C ports. If enough of them collectively decide the new one is superior, then they can start using that port in favor of USB-C altogether.
The EU says nothing about USB-C being the bestest and greatest, they only say that companies have to come to a consensus and have to have 1 port that is shared between all devices for the sake of consumers.
I personally much prefer USB-C over the horrid clusterfuck of proprietary cables that weren't compatible with one another, that's for sure.
"the industry"
If one company does, though, they're basically screwed, as I understand it.
> The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market.
As in: the EU regulation literally addresses this. You'd know it if you didn't blindly repeat uneducated talking points by others who are as clueless as you are.
> Standardization like this is difficult to achieve via consensus - but via policy/regulation?
In the ancient times of 15 or so years ago every manufacturer had their own connector incompatible with each other. There would often be connectors incompatible with each other within a single manufacturer's product range.
The EU said: settle on a single connector voluntarily, or else. At the time the industry settled on micro-USB and started working on USB-C. Hell, even Power Delivery wasn't standardized until USB-C.
Consensus doesn't always work. Often you do need government intervention.
I'm specifically referring to several comments that say they have not read the regulation at all, but think it must be good if Meta opposes it.
> GDPR
You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?
That is malicious compliance with the law, and more or less indicative of a failure of enforcement against offenders.
But the law still caused that; of course regulation is going to be adversarial, and regulators should anticipate that.
No, the laws that ensures that private individuals have the power to know what is stored about them, change incorrect data, and have it deleted unless legally necessary to hold it - all in a timely manner and financially penalize companies that do not.
> and have it deleted unless legally necessary to hold it
Tell that to X which disables your ability to delete your account if it gets suspended.
No, GDPR is the law that allowed me to successfully request the deletion of everything companies like Meta have ever harvested on me without my consent and for them to permanently delete it.
Fun fact, GitHub doesn't have cookie banners. It's almost like it's possible to run a huge site without being a parasite and harvesting every iota of data of your site's visitors!
That's not the GDPR.
I’d side with Europe blindly over any corporation.
European governments have at least a passing interest in the well-being of human beings, which is not something valued by the incentives corporations live by.
The EU is pushing for a backdoor in all major messaging/email providers to "protect the children". No limits and no probable cause required. Everyone is a suspect.
Are you still sure you want to side blindly with the EU?
"All corporations that exist everywhere make worse decisions than Europe" is a weirdly broad statement to make.
[dead]
If I've got to side blindly with any entity it is definitely not going to be Meta. That's all there is.
That's fair, but you don't need to blindly side with anyone.
My original post was about all the comments saying they knew nothing about the regulation, but that they sided with Europe.
I think that gleeful ignorance caught me off guard.
I feel the same, but about the EU. After all, I have a choice whether to use Meta or not. There is no escaping the EU short of leaving my current life.
Meta famously tracks people extensively even if they don't have an account there, through a technique called shadow profiles.
I mean, ideally no one would side blindly at all :D
That's the issue with people from a certain side of politics: they never vote for something, they always side with or vote against something or someone... blindly. It's like pure hate taking over reason. But it's OK, they are the "good" ones, so they are always right and don't really need to think.
Sometimes people are just too lazy to read an article. If you just gave one argument in favor of Meta, then perhaps that could have started a useful conversation.
Perhaps… if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind…
>if a sane person could find anything in favor of one of the most Evil corporations in the history of mankind.
You need some perspective - Meta wouldn't even crack the top 100 in terms of evil:
https://en.m.wikipedia.org/wiki/East_India_Company
https://en.wikipedia.org/wiki/Abir_Congo_Company
https://en.wikipedia.org/wiki/List_of_companies_involved_in_...
https://en.wikipedia.org/wiki/DuPont#Controversies_and_crime...
https://en.m.wikipedia.org/wiki/Chiquita
this alone is worse than all of what you listed combined
https://www.business-humanrights.org/en/latest-news/meta-all...
No... making teenagers feel depressed sometimes is not in fact worse than facilitating the Holocaust, using human limbs as currency, enslaving half the world and dousing the earth with poisons combined.
it is when you consider number of people affected
No, it isn't.
I'm not saying Meta isn't evil - they're a corporation, and all corporations are evil - but you must live in an incredibly narrow-minded and privileged bubble to believe that Meta is categorically more evil than all other evils in the span of human history combined.
Go take a tour of Dachau and look at the ovens and realize what you're claiming. That that pales in comparison to targeted ads.
Just... no.
Dachau was enabled by the Metas of that time. It took advertising, a.k.a. propaganda, to get to that political regime, and it took surveillance to keep people in check and to target the people who got "sponsored" for that lifelong vacation.
All of that combined pales in comparison to what Meta did and is doing to society, at the scale at which it is doing it.
Depends on the visibility of the weapon used and the time scale it starts to show the debilitating effects.
Or you know, some actually read it and agree?
I'm specifically talking about comments that say they haven't read it, but that they side with Europe. Look through the thread, there's a ton like that
So you're surprised that people are siding with Europe blindly, but you're "assuming by default" that you should side with Meta blindly.
Perhaps it's easier to actually look at the points in contention to form your opinion.
I don't remember saying anything about blindly deciding things being a good thing.
Everything in this thread that is even remotely anti-EU-regulation is being heavily downvoted.
The regulations are pretty reasonable though.
Yeah it's kinda weird.
Feels like I need to go find a tech site full of people who actually like tech instead of hating it.
Your opinions aren't the problem, and tech isn't the problem. It's entirely your bad-faith strawman arguments and trolling.
https://news.ycombinator.com/item?id=44609135
That feeling is correct: this site is better without you. Please put your money where your mouth is and leave.
I think your command of the English language might be your issue
I don't know if I'm biased, but there seems to have been a slow, consistent, and accelerating redditification of Hacker News.
It's the AI hype and the people who think they are hackers because they can ask an LLM to write code.
Idk I feel like there are a lot of non-technical people who work in tech here now.
Yeah I think that's part of it.
Probably partly because Reddit somehow seems to have become even worse over the last several years, so there are probably more people fleeing it.
I like tech
I don't like meta or anything it has done, or stands for
See that's crazy though.
You don't like open-source ML (including or not including LLMs, depending on how you feel about them)?
You don't like React?
You don't like PyTorch?
Like a lot of really smart and really dedicated people work on pretty cool stuff at Meta. You don't have to like Facebook, Instagram, etc to see that.
To be fair, anyone who genuinely likes React is probably insane?
Plenty of great projects are developed by people working at Meta. Doesn't change the fact that the company as a whole should be split in at least 6 parts, and at least two thirds of these parts should be regulated to death. And when it comes to activities that do not improve anyone's life such as advertisement and data collection, I do mean literally regulated into bankruptcy.
No, we like tech that works for the people/public, not against them. I know it's a crazy idea.
Tech and techies don't like to be monopolized
As others have pointed out, we like tech.
We don't like what trillion-dollar supranational corporations and infinite VC money are doing with tech.
Hating things like "We're saving your precise movements and location for 10+ years" and "we're using AI to predict how much you can be charged for stuff" is not hating technology
If you don't hate big tech, you haven't been paying attention. Enshittification became a popular word for a reason.
I like tech, but I despise cults
It is fascinating. I assume that the tech world is further to the left, and that interpretation of "left" is very pro-AI regulation.
Are you suggesting something here?
Are you aware of the irony in your post?
I don't recall sharing my opinion on this particular regulation.
I think perhaps you need to reread my comment or lookup "irony"
Presumably it is Meta's growth they have in mind.
Edit: from the LinkedIn post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
Sure, but Meta saying "We share concerns raised by these businesses" translates to: It is in our and only our benefit for PR reasons to agree with someone, we don't care who they are, we don't give a fuck, but just this second it sounds great to use them for our lobbying.
Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.
> has never done and will never do anything in the general public's interest
I'm no Meta apologist, but haven't they been at the forefront of open-source AI development? That seems to be in the "general public's interest".
Obviously they also have a business to run, so their public benefit can only go so far before they start running afoul of their fiduciary responsibilities.
Of course. Skimming over the AI Code of Practice, there is nothing particularly unexpected or qualifying as “overreach”. Of course, to be compliant, model providers can’t be shady which perhaps conflicts with Meta’s general way of work.
EU is going to add popups to all the LLMs like they did all the websites. :(
No, the EU did not do that.
Companies did that, along with thoughtless website owners, small and large, who decided it was better to collect arbitrary data even when they have no capacity to convert it into information.
The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
It was and is a blatant misuse. The website owners all have a choice: shift the responsibility from themselves to the users and bugger them with endless pop ups, collect the data and don’t give a shit about user experience. Or, just don’t use cookies for a change.
And look which decision they all made.
A few notable examples do exist: https://fabiensanglard.net/ No popups, no banner, nothing. He just doesn't collect anything; thus, no need for a cookie banner.
The mistake the EU made was to not foresee the madness used to make these decisions.
I’ll give you that it was an ugly, ugly outcome. :(
> The mistake the EU made was to not foresee the madness used to make these decisions.
It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.
Interesting framing. If you continue this line of thought, it will end up in a philosophical argument about what kind of image of humanity one has. So your solution would be to always expect everybody to be the worst version of themselves? In that case, that will make for some quite restrictive laws, I guess.
People are generally responsive to incentives. In this case, the GDPR required:
1. Consent to be freely given, specific, informed, and unambiguous, and as easy to withdraw as to give
2. High penalties for failure to comply (€20 million or 4% of worldwide annual turnover, whichever is higher)
Compliance is tricky and mistakes are costly. A pop-up banner is the easiest off-the-shelf solution, and most site operators care about focusing on their actual business rather than compliance, so it's not surprising that they took this easy path.
If your model of the world or "image of humanity" can't predict an outcome like this, then maybe it's wrong.
> and most site operators care about focusing on their actual business rather than compliance,
And that is exactly the point. Thank you. What is encoded as compliance in your example is actually the user experience. They off-loaded responsibility completely to the users. Compliance is identical to UX at this point, and they all know it. To modify your sentence: “and most site operators care about focusing on their actual business rather than user experience.”
The other thing is a lack of differentiation. The high penalties you are talking about only really matter for the top-traffic websites. I agree, it would be insane to gamble on removing the banners in that league. But tell me: why does every single-site website of a restaurant, fishing club, or retro-gamer blog have a cookie banner? For what reason? They won't make the turnover you dream about in your example even if they won the lottery, twice.
> Compliance is tricky
How is "not selling user data to 2000+ 'partners'" tricky?
> most site operators care about focusing on their actual business
How is their business "send user's precise geolocation data to a third party that will keep that data for 10 years"?
Compliance with GDPR is trivial in 99% of cases
Well, you and I could have easily anticipated this outcome. So could regulators. For that reason alone…it’s stupid policy on their part imo.
Writing policy is not supposed to be an exercise where you “will” a utopia into existence. Policy should consider current reality. If your policy just ends up inconveniencing 99% of users, what are we even doing lol?
I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.
I’m sure there are other creative solutions as well. Or just implementing larger fines.
If the law incentivized practically every website to implement the law in the "wrong" way, then the law seems wrong and its implications weren't fully thought out.
"If you have a dumb incentive system, you get dumb outcomes" - Charlie Munger
> The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
You are absolutely right... Here is the site on europa.eu (the EU version of .gov) that goes into how the GDPR works. https://commission.europa.eu/law/law-topic/data-protection/r...
Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).
Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
> Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
Well, it's an information-only website; it has no ads or even a login, so they don't need to use any cookies at all. In fact, if you look at the page response in the browser dev tools, there are no cookies on the website, so to be honest they should just delete the cookie banner.
At https://commission.europa.eu/cookies-policy_en#thirdpartycoo... you can see the list of 3rd party cookies they use (and are required to notify about it).
In theory, they could rewrite their site to not require any of those services. This is why the EU law was nonsense. Many of those cookies are listed just because they want to embed things like YouTube or Vimeo videos. Embedding YouTube to show videos to your users is massively cheaper and easier than self-hosted video infrastructure. The idea that the GDPR's own website just implemented GDPR "wrong" because they should avoid using cookies is nonsense and impractical.
The other part of the point I was trying to make is that if there's a different technological solution to cookie banners, the europa.eu sites are not demonstrating it. Instead, companies that don't do it that way get fined for some inadequacy in their approach.
Thus, I hold that the GDPR requires cookie banners.
---
Another part to consider is that if videos (and LinkedIn for job searches, and Google Maps for maps, and Internet Archive for whatever they embed from there) are sufficiently onerous third-party cookies ("yes, we're being good with our cookies, but we use third-party providers and can't do anything about them, but we informed you and you accepted their cookies")... then wouldn't it be an opportunity for the Federal Ministry of Transport and Digital Infrastructure https://en.wikipedia.org/wiki/Federal_Ministry_for_Transport or similar to offer grants https://www.foerderdatenbank.de for companies to create a viable GDPR-friendly alternative to those services?
That is, if the GDPR and other EU regulations weren't stifling innovation and establishing regulatory capture (it's expensive to hire and retain the lawyers needed to skirt the rules), making it impossible for such newer alternative companies to thrive and prosper within the EU.
Which is what the article is about.
But this is a failure on the part of the EU law makers. They did not understand how their laws would look in practice.
Obviously some websites need to collect certain data, and the EU provided a pathway for them to do that: user consent. It was essentially obvious that every site which wanted to collect data for some reason could simply ask for consent. If this wasn't intended by the EU, it was obviously foreseeable.
>The mistake the EU made was to not foresee the madness used to make these decisions.
Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.
What makes all these EU laws so insufferable isn't that they make certain things illegal, it is that they force everyone to adopt specific compliance processes, which often do exactly nothing to achieve the intended goal.
User consent was the compliance path to be able to gather more user data. Not foreseeing that sites would just ask that consent was a failure of stupid bureaucrats.
Of course they did not intend that sites would just show pop ups, but the law they created made this the most straightforward path for compliance.
Tracking users isn’t actually needed, though. Websites that store data only for the functionality they offer don’t need to seek consent.
The actual problem is weak enforcement. If the maximum fines allowed by the law had been levied, several companies would've been effectively ended or excluded from the EU. That would've been a good incentive for non-malicious compliance.
That can't possibly be the right way to frame this.
I agree with some parts of it but also see two significant issues:
1. It is even statistically implausible that everyone working at the EU is tech-illiterate and stupid and everybody at HN is a body of enlightenment on two legs. This is a tech-heavy forum, but I would guess most here are bloody amateurs regarding theory and science of law and you need at least two disciplines at work here, probably more.
This is drifting too quickly into a territory of critique by platitudes for the sake of criticism.
2. The EU made an error of commission, not omission, and I think that is a good thing. They need to make errors in order to learn from them and get better. Critique by platitudes is not going to help the case; it is actually working against it. The next person initiating an EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet. So, how should that work out? Exactly like this: we will be stuck for half an eternity, and no one will correct anything, because if you don't do anything you can't do any wrong! We as a society mostly record the things that someone did wrong but almost never record something somebody should have done but didn't. That is an error of omission, and it is usually magnitudes more significant than an error of commission. What is needed is an alternative way of handling and judging errors. Otherwise, the path of learning by error will be blocked by populism.
In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time, and we will probably have to suffer another ten years or so until the error gets removed. The EU as a system needs to be sped up considerably so that it takes an iterative approach when an error is made. I would argue for a cybernetic feedback-loop approach here, but as we are on HN, this translates to: move fast and break things.
On point 1. Tech illiteracy is something that affects an organization, it is independent of whether some individuals in that organization understand the issues involved. I am not arguing that nobody at the EU understands technology, but that key people pushing forward certain pieces of legislation have a severe lack of technical background.
On point 2. My argument is that the EU is fundamentally legislating wrong. The laws they create are extremely complex and very hard to decipher, even by large corporate law teams. The EU does not create laws which clearly outlaw certain behaviors; they create corridors of compliance, which legislate how corporations have to set up processes to allow for certain ends. This makes adhering to these laws extremely difficult, as you cannot figure out whether something you are trying to do is illegal. Instead you have to work backwards: start from what you want to do, then follow the law backwards and decipher the way bureaucrats want you to accomplish that thing.
I do not particularly care about cookie banners. They are just an annoying thing. But they clearly demonstrate how the EU is thinking about legislation, not as strict rules, but as creating corridors. In the case of cookie banners the EU bureaucrats themselves did not understand that the corridor they created allowed basically anyone to still collect user data, if they got the user to click "accept".
The EU creates corridors of compliance. These corridors often map very poorly onto actual processes and often do little to solve the actual issues. The EU needs to stop seeing themselves as innovators who create broad, highly detailed regulations. They need to radically reform themselves and provide clear and concise laws which guarantee basic adherence to the desired standards. Only then will their laws find social acceptance and not be viewed as bureaucratic overreach.
> Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.
I am sorry but I too agree with OP's statement. The EU is full of technocrats who have no idea about tech and they get easily swayed by lobbies selling them on a dream that is completely untethered to the reality we live in.
> The next person initiating a EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet.
You are talking as if someone is actually looking at the problem. Is that so? Because if there were such a feedback loop, as you seem to think exists, in place to correct this issue, then where is it?
> In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time and we will probably have to suffer another ten years or so until the error gets removed.
So we should not hold people accountable when they make mistakes and waste everyone's time then?
There is plenty of evidence to show that the EU as a whole is incompetent when it comes to tech.
Case in point: the chat control law that is being pushed despite every single expert warning of the dire consequences in terms of privacy and the dangerous precedent it sets. Yet they keep pushing it because it is seen as a political win.
If the EU knew something about tech, they would know that placing back-doors in all communication applications is a non-starter.
> You are talking as if someone is actually looking at the problem. is that so? Because if there was such a feedback loop that you seem to think exists in order to correct this issue, then where is it?
Yes, the problem is known and actually worked on. There are several approaches, some being initiated on country level (probably because EU is too slow) some within the institution, as this one:
https://www.edps.europa.eu/data-protection/our-work/subjects...
No, I don’t think that institutionalised feedback loops exist there, but I do not know. I can only infer from observation that they are probably not in place, as this would, I would think, show up as “move fast and break things”.
> So we should not hold people accountable when they make mistakes and waste everyone's time then?
I have not made any direct remark about accountability, but I'll play along: what happens by handling mistakes that way is accountability through fear. What is needed, in my opinion, is calculated risk-taking and responsibility on a basis of trust, not punishment. Otherwise, eventually, you will be left with no one taking over the job, or people taking over the job who will conserve the status quo. This is the opposite of pushing things through at high speed. There needs to be an environment in place which can absorb this variety before you can do that (see also: Peter Senge's "Learning Organisation").
On a final note, I agree that the whole lobbying got out of hand. I also agree on the back-door issue, and I would probably agree on a dozen other things. I am not in the seat of generally approving what the European administration is doing. One of my initial points, however, was that the EU is not "the evil, dumb-as-a-brick creator" of the cookie-popup mess. Instead, this is probably one of the biggest cases of malicious compliance in history. And still, the EU gets the full, 100% blame, almost unanimously (and no comment as to what the initial goal was). That is quite a shift in the accountability you were just so interested in not losing.
The internet is riddled with popups and attention grabbing dark patterns, but the only one that's a problem is the one that actually lets you opt out of being tracked to death?
...yes? There are countless ways it could have been implemented that would have been more effective, and less irritating for billions of people. Force companies to respect the DNT header. Ta-daa, done. But that wouldn't have been profitable, so instead let's cook up a cottage industry of increasingly obnoxious consent banners.
No popup is required, just every lobotomized idiot copies what the big players do....
Oh ma dey have popups. We need dem too! Haha, we happy!
Actually, it's because marketing departments rely heavily on tracking cookies and pixels to do their job, as their job is measured on things like conversions and understanding how effective their ad spend is.
The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing they were doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.
No it's because I'll get fined by some bureaucrat who has never run a business in his life if I don't put a pointless popup on my stupid-simple shopify store.
Is it an option for your simple store to not collect data about subjects without their consent? Seems like an easy win.
Your choice to use frameworks subsidized by surveillance capitalism doesn't need to preclude my ability to agree to participate does it?
Maybe a handy notification when I visit your store asking if I agree to participate would be a happy compromise?
You know it's possible to make good reasoned points without cramming "<pseudo-marxist buzzword> capitalism" into a sentence for absolutely no reason.
All I want is to not be forced to irritate my customers about something that nobody cares about. It doesn't have to be complicated. It is how the internet was for all of its existence until a few years ago.
> shopify store.
There you go. Shopify does a bunch of analytics gathering for you. Whether you choose to use it or not, the decision was made by someone who thought it would be a value add and now you need a banner.
No need for a cookie banner if you don't collect data without consent. Every modern browser supports APIs that answer that question without pestering the user with a cookie banner.
It's important to point out that it's actually not at all about cookies. It's tracking via information stored on the user's device in general that requires consent.
You could use localStorage for the purposes of tracking and it still needs to have a popup/banner.
An authentication cookie does not need a cookie banner, but if you issue lots of network requests for tracking and monitor server logs, that does now need a cookie banner.
If you don't store anything, but use fingerprinting, that is not covered by the law but could be covered by GDPR afaiu
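The rule being described — strictly necessary storage needs no consent, while any storage used for tracking does, regardless of the mechanism — can be sketched as a small decision function. This is purely illustrative: the category names, the `STRICTLY_NECESSARY` set, and the `has_consent` flag are assumptions for the sketch, not anything mandated by the law, and none of this is legal advice.

```python
# Illustrative sketch of the consent rule discussed above.
# Storage names and categories are made up for the example.

STRICTLY_NECESSARY = {"session_id", "csrf_token", "cart"}

def may_store(name: str, purpose: str, has_consent: bool) -> bool:
    """Return True if storing `name` on the user's device is allowed.

    Strictly necessary storage (login sessions, CSRF tokens, shopping
    carts) needs no consent. Anything used for tracking or analytics
    requires consent first, whether it lives in a cookie, localStorage,
    or anywhere else on the device.
    """
    if purpose == "strictly_necessary" and name in STRICTLY_NECESSARY:
        return True
    return has_consent

# A site that only uses functional storage never needs a banner:
assert may_store("session_id", "strictly_necessary", has_consent=False)
# Tracking storage without prior consent is not allowed:
assert not may_store("analytics_id", "tracking", has_consent=False)
```

The point of the sketch is that the banner is only forced on sites whose storage falls into the second branch; a site whose every stored item passes the first check has nothing to ask consent for.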
I hate these popups so much. The fact that they haven't corrected any of this BS shows how slow these people are to move.
The "I still don't care about cookies" extension works quite well. Auto-clicks accept and closes the window in approx half a second.
Who are 'they', and 'these people'? NB: I haven't had a pop-up for years. Perhaps it could be you that is slow. Do you use ad-blocking?
https://www.joindns4.eu/for-public
you have to be in europe to understand my comment
Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.
It's always the go-to excuse against any regulation.
I hope this isn't coming down to an argument of "AI can't advance if there are rules" about things like copyright, protection of the sources of information, etc.
Why does meta need to sign anything? I thought the EU made laws that anyone operating in the EU including meta had to comply to.
It's not a law, it's a voluntary code of conduct given heft by EU endorsement.
> it's a voluntary code of conduct
So then it's something completely worthless in the globally competitive cutthroat business world, that even the companies who signed won't follow, they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
No, it's a voluntary code of conduct so AI providers can start implementing changes before the conduct becomes a legal requirement, and so the code itself can be updated in the face of reality before legislators have to finalize anything. The EU does not have foresight into what reasonable laws should look like, they are nervous about unintended consequences, and they do not want to drive good-faith organizations away, they are trying to do this correctly.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
It's a chance for businesses to try out the rules, so they can have an informed opinion and give useful feedback when the EU turns it into an actual law. And also so they don't have to scramble to comply once the rules suddenly become binding.
But well, I wouldn't expect Meta to sign into it either.
“Heft of EU endorsement.” It’s amazing how Europeans have simply acquiesced to an illegitimate EU imitation government simply saying, “We dictate your life now!”.
European aristocrats just decided that you shall now be subjects again and Europeans said ok. It’s kind of astonishing how easy it was, and most Europeans I met almost violently reject that notion in spite of the fact that it’s exactly what happened as they still haven’t even really gotten an understanding for just how much Brussels is stuffing them.
In a legitimate system it would need to be up to each sovereign state to decide something like that, but in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
Honestly, US is really not in a good shape to support your argument.
If aristocratic figures had so much power in the EU, they wouldn't be fleeing from the union.
In reality, the US is plagued with greed, scams, mafias in all sectors, human rights violations, and an economy that's like a house of cards. In contrast, you feel human when you're in the EU. You have a voice, rights, and common sense!
It definitely has its flaws, but at least the presidents there are not rug-pulling their own citizens and giving pardons to crypto scammers... Right?
You don’t understand the fundamental structure of the EU
> in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
I am happy to inform you that the EU actually works according to treaties which basically cover every point of a constitution and has a full set of courts of law ensuring the parliament and the European executive respect said treaties and allowing European citizens to defend their interests in case of overreach.
> European aristocrats just decided
I am happy to inform you that the European Union has a democratically elected parliament voting its laws and that the head of commission is appointed by democratically elected heads of states and commissioners are confirmed by said parliament.
If you still need help with any other basic fact about the European Union don’t hesitate to ask.
The world seems to be literally splitting apart, and Meta was a huge part of sowing discontent and stoking violence. I hope to move to Europe one day and I can use an open source LLM at that point
I admit that I am biased enough to immediately expect the AI agreement to be exactly what we need right now if this is how Meta reacts to it. Which I know is stupid because I genuinely have no idea what is in it.
There seem to be 3 chapters of this "AI Code of Practice" https://digital-strategy.ec.europa.eu/en/policies/contents-c... and its drafting history https://digital-strategy.ec.europa.eu/en/policies/ai-code-pr...
I did not read it yet, only familiar with the previous AI Act https://artificialintelligenceact.eu/ .
If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet if it can be called fair use?)
> is it clear yet if it can be called fair use?
Yes.
https://www.publishersweekly.com/pw/by-topic/digital/copyrig...
Though the EU has its own courts and laws.
In France, fair use doesn't even exist!
We have exceptions, which are similar, but the important difference is that courts decide what is fair and what is not, whereas exceptions are written in law. It is a more rigid system that tends to favor copyright owners, because if what is seen as "fair" doesn't fit one of the listed exceptions, copyright still applies. Note that AI training probably fits one of the exceptions in French law (but again, it is complicated).
I don't know the law in other European countries, but AFAIK, EU and international directives don't do much to address the exceptions to copyright, so it is up to each individual country.
> In France, fair use doesn't even exist!
Same in Sweden. The U.S. has one of the broadest and most flexible fair use laws.
In Sweden we have "citaträtten" (the right to quote). It only applies to text and it is usually said that you can't quote more than 20% of the original text.
It was a district judge's pretrial ruling on June 25th; I'd be surprised if this doesn't get challenged soon in higher courts.
And acquiring the copyrighted materials is still illegal - this is not a blanket protection for all AI training on copyrighted materials
Even if it gets challenged successfully (and tbh I hope it does), the damage is already done. Blocking it at this stage just pulls up the ladder behind the behemoths.
Unless the courts are willing to put injunctions on any model that made use of illegally obtained copyrighted material - which would pretty much be all of them.
But a ruling can determine that the results of the violation needs to be destroyed.
Anthropic bought millions of books and scanned them, meaning that (at least for those sources) they were legally obtained. There has also been rampant piracy used to obtain similar material, which I won't defend. But it's not an absolute - training can be done on legally acquired material.
Why is acquiring copyrighted materials illegal?
You can just buy books in bulk under the first sale doctrine and scan them.
Which is not what any of the companies did
Anthropic ALSO got copyrighted material legally, but they pirated massive amounts first.
Apologies, I read your original statement as somehow concluding that you couldn't train an AI legally. I just wanted to make it extra clear that based on current legal precedent in the U.S., you absolutely can. Methodology matters, though.
Being evil doesn't make them necessarily wrong.
Agreed, that's why I'm calling out the stupidity of my own bias.
[flagged]
[flagged]
It seems EU governments should be preventing US companies from dominating their countries.
[flagged]
You really went all out with showing your contempt, huh? I'm glad that you're enjoying the tech companies utterly dominating US citizens in the process
As a citizen I’m perfectly happy with the AI Act. As a “person in tech”, the kind of growth being “stunted” here shouldn’t be happening in the first place. It’s not overreach to put some guardrails in place and protect humans from the overreaching ideas of the techbro elite.
A problem with the EU over-regulating, from its citizens' point of view, is that the AI companies will set up elsewhere and the EU will become a backwater.
FOMO is not a valid reason to abandon the safety and wellbeing of the people who will inevitably have to endure all this “AI innovation”. It’s just like building a bridge: there are rules, and checks, and triple checks.
At the moment they are more like chatbots and I'm not sure they need the same sort of rules and triple checks as bridges.
Yep, that's why they need to regulate ASML. Tell ASML they can only service 'compliant' foundries, where 'compliant' foundry means 'only sells to compliant datacenters/AI firms'.
That's how you get every government to throw money at any competitor to ASML and try to steal their IP.
From Europe's POV, it's better to compete in an area Europe leads (fab equipment) than an area Europe lags (frontier models).
ASML's EUV technology was developed in the US, you know. I don’t know if you’ve noticed, but ASML follows US export controls.
As a member of the techbro elite, I find it incredibly annoying when people regulate stuff that ‘could’ be used for something bad (and many good things), instead of regulating someone actually using it for something bad.
You’re too focused on the “regulate” part. It’s a lot easier to see it as a framework. It spells out what you need to anticipate in the spirit of the law and what’s considered good or bad practice.
If you actually read it, you will also realise it’s composed entirely of “common sense”. Like, you wouldn’t want to do the stuff it says not to do anyway. Remember, corps can’t be trusted because they have a business to run. That’s why, when humans can be exposed to risky AI applications, the EU says the model provider needs to be transparent and demonstrate they’re capable of operating a model safely.
The main thing humans can't be excused from is poverty.
Which is the path the EU is choosing. The EU has been enjoying colonial loot for so long that they have lost any sense of reality.
I feel a lot of emotions in your comment but no connection with reality. The AI Act is really not that big of a deal. If Meta is unhappy with it, it means it’s working.
Interesting because OpenAI committed to signing
https://openai.com/global-affairs/eu-code-of-practice/
The biggest player in the industry welcomes regulation, in hopes it’ll pull the ladder up behind them that much further. A tale as old as red tape.
> Let us not fool ourselves. There are those online who seek to defend a master that could care less about them. Fascinating.
How could you possibly infer what I said as a defense of Meta rather than an indictment of OpenAI?
Fascinating.
Meta isn't actually an AI company, as much as they'd like you to think they are now. They don't mind if nobody comes out as the big central leader in the space, they even release the weights for their models.
Ask Meta to sign something about voluntarily restricting ad data or something and you'll get your same result there.
Yeah well OpenAI also committed to being open.
Why does anybody believe ANYthing OpenAI states?!
Sam has been very pro-regulation for a while now. Remember his “please regulate me” world tour?
OpenAI does direct business with government bodies. Not sure about Meta.
About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for quote "national security missions." And all that is after the military enlisted OpenAI's "Chief Product Officer" and sent him straight to Lt. Colonel to work in a collaborative role directly with the military.
And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
Meta knows all there is about overreach and of course they don’t want that stunted.
Nit: (possibly cnbc's fault) there should be a hyphen to clarify meta opposes overreach, not growth. "growth-stunting overreach" vs "growth (stunting overreach)"
EU-wide ban of Meta incoming? I'd celebrate personally, Meta and their products are a net negative on society, and only serve to pump money to the other side of the Atlantic, to a nation that has shown outright hostility to European values as of late.
I am enjoying the EU self-destructing out of pure jealousy.
Good. As Elon says, the only thing the EU exports is regulation. Same geniuses that make us click 5 cookie pop-ups on every webpage.
People complain more about cookie banners than they do the actual invasive tracking by those cookies.
Those banners suck and I wouldn't mind if the EU rolled back that law and tried another approach. At the same time, it's fairly easy to add an extension to your browser that hides them.
Legislation won't always work. It's complex, and human behavior is somewhat unpredictable. We've let tech run rampant up to this point - it's going to take some time to figure out how to best control them. Throwing up our hands because it's hard to protect consumers from powerful multi-national corporations is a pretty silly position imo.
> than they do the actual invasive tracking by those cookies.
maybe people have rationally compared the harm done by those two
can you expand on what sort of rationality would lead a person to consider an at worst annoying pop-up to be more dangerous than data exfiltration to companies and governments that are already acting in adversarial ways? The US government is already using people's social media profiles against them, under the Cloud act any US company can be compelled to hand data over to the government, as Microsoft just testified in France. That's less dangerous than an info pop up?
Of course it has nothing to do with rationality. They're mad at the first thing they see, akin to the smoker who blames the regulators when he has to look at a picture of a rotten lung on a pack of cigarettes
gdpr doesn't stop governments. governments are already spying without permission and they exploit stolen data all the time. so yes, the cost of gdpr compliance, including popups, is higher than the imperceptible cost of tracked advertising.
For one, that is objectively incorrect. GDPR prevents a whole host of data collection outright, shifts the burden onto corporations to collect the minimal amount of data possible, and gives you the right to explicitly consent to what data can be collected.
Being angry at a popup that merely makes transparent what a company is trying to collect from you, and gives you the explicit option to say no to that, is just infantile. It basically amounts to saying that you don't want to think about how companies are exploiting your data, and that you're a sort of internet browsing zombie. That is certainly a lot of things, but it isn't rational.
They didn't give us that. Mostly non-compliant websites gave us that.
Then the entire ad industry moved to fingerprinting, mobile ad kits, and 3rd party authentication login systems, so it made zero difference even if they did comply. Google and Meta aren't worried about cookies when they have JS on every single website, but it burdens every website user.
This is not correct, the regulation has nothing to do with cookies as the storage method, and everything to do with what kind of data is being collected and used to track people.
Meta is hardly to blame here; it is the site owners that choose to add Meta tracking code to their site and therefore have to disclose it and opt in the user via "cookie banners".
that's deflecting responsibility. it's important to care about the actual effects of decisions, not hide behind the best case scenario. especially for governments.
in this case, it is clear that the EU policy resulted in cookie banners
This thread is people going "EU made me either choose to tell you that I spy on you or stop spying on you, now I need to tell everyone I spy on them, fucking EU".
Elon is an idiot.
If he disagrees with EU values so much, he should just stay out of the EU market. It's a free world, nobody forced him to sell cars in the EU.
Trump literally started a trade war because the EU exports more to the US than vice versa.
He also did the war thing on the UK which imports more from the US than it exports. He just likes trade wars I think.
Meta on the warpath, Europe falls further behind. Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.
> Unless you're ready for a fight, don't get in the way of a barbarian when he's got his battle paint on.
You talking about Zuckerberg?
Yeah. He just settled the Cambridge Analytica suit a couple days ago, he basically won the Canadian online news thing, he's blown billions of dollars on his AI angle. He's jacked up and wants to fight someone.
> It aims to improve transparency and safety surrounding the technology
It really does, especially with technology run by so few that is changing things so fast.
> Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth
God forbid critical things and impactful tech like this be created with a measured head, instead of this nonsense mantra of "Move fast and break things"
I'd really prefer NOT to break what little semblance of society social media hasn't already broken.
Not a big fan of this company or its founder but this is the right move.
The EU is getting to be a bigger nuisance than they are worth.
The US, China and others are sprinting, and thus spiraling towards the destitution of the majority of society, unless we force these billionaires' hands: figure out how we will eat and sustain our economies when one person is now doing a white or blue collar job (Amazon warehouse robots) that ten used to do.
Is every sprint a spiral towards destruction?
I think it is a legitimate concern in Europe just because their economies are getting squeezed from all sides by USA and China. It's a lot more in the public consciousness now since Trump said all the quiet stuff out loud instead of just letting the slow boil continue.
The Meta that uses advertising tooling for propaganda and elected Trump?
The more I read of the existing rule sets within the eurozone the less surprised I am that they make additional shit tier acts like this.
What does surprise me is that anything works at all under the existing rulesets. Effectively no one has technical competence, and the main purpose of the legislation seems to be adding mostly meaningless but paternally formulated complexities in order to justify hiring more bureaucrats.
> How to live in Europe
> 1. Have a job that does not need state approval or licensing.
> 2. Ignore all laws; they are too verbose and too technically complex to enforce properly anyway.
I think you can only happily live in Europe if you are employed by the state and like all the regulations.
I have a strong aversion to Meta and Zuck but EU is pretty tone-deaf. Everything they do reeks of political and anti-American tech undertone.
They're career regulators
The problem with the EU regulation is the same as always, first and foremost they do not understand the topic and can not articulate a clear statement of law.
They create mountains of regulations, which are totally unclear and which require armies of lawyers to interpret. Adherence to these regulations becomes a major risk factor for all involved companies, which then try to avoid interacting with that regulation at all.
Getting involved with the GDPR is a total nightmare, even if you want to respect your users privacy.
Regulating AI like this is especially idiotic, since every year currently brings a major shift in how AI is utilized. It is a totally open question how hard training an AI "from scratch" will be in 5 years. The EU is incapable of actually writing laws which make clear what isn't allowed; instead they are creating vague corridors for how companies should arrive at certain outcomes.
The bureaucrats see themselves as the innovators here. They aren't trying to make laws which prevent abuses, they are creating corridors for processes for companies to follow. In the case of AI these corridors will seem ridiculous in five years.
Sent from an iPhone probably having USB-C because of the EU.
Just because they occasionally (and even frequently) do a good thing does not mean that, overall, their policies don't harm their own economies.
The economy does not exist in a vacuum. Making number go up isn't the end goal, it is to improve citizens lives and society as a whole. Everything is a tradeoff.
I charge my phone wirelessly. The presence of a port isn't a positive for me. It's just a hole I could do without. The shape of the hole isn't important.
Besides, I posted from my laptop.
Please don't use ableist language.
Europe is the world’s second largest economy and has the world’s highest standard of living. I’m far from a fan of regulation, but they’re doing a lot of things right by most measures. Irrelevancy is unlikely in their near future.
Just like GDPR, it will tremendously benefit big corporations (even if Meta is resistant) and those who are happy NOT to follow regulations (which is a lot of Chinese startups).
And consumers will bear the brunt.