Abstract

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.
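
The contrast between the traditional on/off switch and murky consent's middle ground can be sketched as a toy model. This is purely illustrative — the Article proposes no code, and all names here are hypothetical:

```python
from enum import Enum

class Consent(Enum):
    """Toy model of the consent spectrum described above (illustrative only)."""
    NONE = "none"        # no consent: no processing permitted
    MURKY = "murky"      # fictional/dubious consent: a weak, restricted license
    EXPRESS = "express"  # full, affirmative consent

def permitted_purposes(consent: Consent) -> set[str]:
    """Processing purposes each level might license, under one reading of the proposal."""
    if consent is Consent.NONE:
        return set()
    if consent is Consent.MURKY:
        # a restricted license: strictly necessary processing only,
        # subject to the four duties and ongoing regulatory oversight
        return {"strictly_necessary"}
    return {"strictly_necessary", "analytics", "advertising"}
```

On this sketch, the traditional binary offers only the `NONE` and `EXPRESS` endpoints, while `MURKY` licenses only strictly necessary processing rather than working consent's full "moral magic."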

  • GolfNovemberUniform@lemmy.ml · 8 months ago (edited)

    So this type of consent is something like “you don’t need consent to do basic data processing because consent isn’t real anyway”? Bruh, what’s up with all the horrifying ideas recently?

    EDIT: the upvote rate of this post makes my already miserable hope for humanity even lower

    • ReversalHatchery@beehaw.org · 8 months ago

      As I understand it, the article proposes a third consent option, which it calls “murky consent,” that would grant only very basic, very minimal data-processing rights. For example, it would not allow use based on “legitimate interest.”

      • GolfNovemberUniform@lemmy.ml · 8 months ago

        Still, it is data collection without my consent. What if I open the website accidentally, for example? That concept is nothing but enshittification to me. It would be better to address data sharing at the level of the law (e.g., limit it and impose severe punishments for violations).

        • ReversalHatchery@beehaw.org · 8 months ago

          That’s the neat part. With this, they would only be allowed to collect data that’s technically absolutely necessary. No “legitimate interest” and whatever bullshit. This is for those who only give their “consent” because other people are making them use the system.

          This of course won’t solve trust issues. It won’t make me trust Facebook and Google to honor it; they can do whatever they want in ways that will never become known. But I don’t think that’s solvable with big central providers.

          • GolfNovemberUniform@lemmy.ml · 8 months ago (edited)

            I think that consent is necessary. ANY data processing without consent should be illegal, even if you’re not able to use the system without it. It’s a matter of human rights.

            • ReversalHatchery@beehaw.org · 8 months ago

              The article complains that the decision right now is just an on/off switch. If this were merely a replacement for that switch, it would still be binary, so nothing would change.

              • GolfNovemberUniform@lemmy.ml · 8 months ago (edited)

                As I understood it, they proposed replacing on/off with all/basic_only. That is bad because I have the right not to give out any data at all, especially if I visit a website accidentally. It may not make much practical difference for most people, but I’m really serious about my rights.

          • BearOfaTime@lemm.ee · 8 months ago

            Who defines what is “absolutely necessary”?

            I guarantee none of these blinkered philistines would like my definition.

            • ReversalHatchery@beehaw.org · 8 months ago

              Disallowing anything based on “legitimate interest” would already be a huge step. As far as I know, that’s how companies get away with stalking.

              • youmaynotknow@lemmy.ml · 8 months ago

                The problem is that there’s no clear definition of “legitimate interest.” You could argue that Google has a “legitimate interest” in every part of your life, because they do: it lets them sell your data. Legitimate interest.

                The way I see it, this can only be described as “legally stealing.” They take our data without our knowledge and use it however they want, because they own it the moment they take it from us, and they face no legal threat for it; thus, “legally stealing.”

                • ReversalHatchery@beehaw.org · 8 months ago

                  I meant “legitimate interest” in the sense the GDPR uses it. Data mining is often put under that justification in privacy policies.