Why the EU ‘Privacy Icons’ are disastrous

Some member of parliament must have severely misunderstood the meaning of ‘privacy-by-design’. These ‘standardised privacy icons’ and their logic are so disastrous that enforcing them will not strengthen but weaken the new European privacy legislation. The icons and suggested copy are unclear and unusable, and being forced to show them will especially punish the companies and organisations that do privacy right.

▐ ・ ‿ ・▐

Wisdoms Borrowed

Data-Determinism or Gaming Your Consumer Score?

Professor Pasquale writes that ‘gaming your score’ might even be dangerous, as trying to influence scoring systems could backfire: “If a person attached a fitness device to a dog and tried to claim the resulting exercise log, he suggests, an algorithm might be able to tell the difference and issue that person a high score for propensity toward fraudulent activity.”

Proactive Transparency

In the National Register, the Kruispuntbank Social Security, the Kruispuntbank Enterprises, the Finance Department, etc., we hold a very large amount of data. The government knows perfectly well what this information is used for. It’s not much effort to share that information with the data subjects. – Tommelein (Belgian Secretary of State for Privacy)

Defining IxD

Interaction design is the optimisation of state-based systems towards a set of stakeholder effects that include implicit user goals. – Chris Noessel

Privacy Icons: Resources, Discussions & Research

There is some evidence that user understanding of privacy policies is enhanced by using icons and labels alongside conventional legal text (a “multi-layered” privacy notice approach). However, this hypothesis has not really been tested ‘in the wild’. (CREATe Report on The Use of Privacy Icons and Standard Contract Terms for Generating Consumer Trust and Confidence in Digital Services)

Algorithmic Accountability

From the list of questions above it should be clear that there are a number of human influences embedded into algorithms, such as criteria choices, training data, semantics, and interpretation. Any investigation must therefore consider algorithms as objects of human creation and take into account intent, including that of any group or institutional processes that may have influenced their design.

AI is not a single entity

“Additionally, AI is not a single entity. Computer programs, even artificially intelligent ones, work far better as specialists rather than generalists. A more likely scenario for achieving artificial intelligence within our lifetime is through a network of sub programs handling vision (computer vision), language (NLP), adaptation (machine learning), movement (robotics)…etc. AI is not a he or a she or even an it, AI is more like a ‘they’.” – Rob Smith
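The “network of sub-programs” idea in Smith’s quote can be pictured as a dispatcher routing each kind of task to a narrow specialist. The sketch below is purely illustrative — the function names and routing keys are invented for this example, not part of any real framework:

```python
# Illustrative sketch: 'AI' as an ensemble of specialists, not one generalist.
# Every name here is hypothetical, chosen to mirror the quote's examples.

def vision(task):
    """Computer-vision specialist."""
    return f"recognised objects in {task!r}"

def language(task):
    """Natural-language-processing specialist."""
    return f"parsed text {task!r}"

def adaptation(task):
    """Machine-learning specialist."""
    return f"updated model from {task!r}"

# The 'AI' is the routing layer plus its specialists — a 'they', not an 'it'.
SPECIALISTS = {"image": vision, "text": language, "feedback": adaptation}

def handle(kind, task):
    """Dispatch each task to the matching specialist sub-program."""
    return SPECIALISTS[kind](task)

print(handle("text", "privacy policy"))
```

The point of the sketch is only structural: capability emerges from coordination between narrow components, which is why the quote resists treating AI as a single entity.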

Head Meets Heart: IBM’s UX Guidelines

Users move between environments (office, car, home, soccer game) and activities (walking, waiting in line, sitting, meeting) many times a day. Their changing circumstances create new contexts designers must assess and design for on a moment-to-moment basis. Design for the most desirable outcomes while keeping the shifting factors of users’ working lives in mind.

Ethics and Moral Questions for Technology

When we ask these questions, we won’t always like the answers. Just give it an honest try for your own smartphone use. Still, asking these questions and becoming more aware of some of the more negative aspects of the ‘connected technologies’ we are introducing into our own and each other’s lives at such a fast pace will allow us to adjust our behaviour and usage, or maybe even parts of the technology itself.