Application Lifecycles

@natecull once made a similar comment on my history post. It is a valid point, and I have a response to it, and to yours as well.

Our understanding of accountability has been oriented entirely towards the mutable world, and not towards I/O. An example is a user's request to have their data deleted, which implies that the user input, and any derivative of it, must now be destroyed.

We may hold ourselves to high ethical standards, but we must not assume the same of corporations. The requirement to destroy information has one weakness: nonexistential claims about information, claims that no copy of it remains, are in general not enforceable, since one can never prove that a hidden copy was not kept. Do we not already know how that works? When we enforce, we enforce a weaker version: that corporations do not exploit the information they are required to destroy. This enforcement is as good as we can get, and it is arguably good enough, and fair!

It is like how the police can still find drug dealers even when they cannot decrypt chat messages: the drugs have to be sold somehow. Likewise, exploited data has to surface somewhere, in a product, an ad, a decision, for the exploitation to be worth anything.

If my I/O data can somehow be exploited by another party, and they will not listen to me, fine. (Therefore, Yorba is fine.) I make sure to exploit my own I/O data as well, and additionally the I/O of whoever (Yorba, etc.) exploited mine. At the same time, I do not want to exploit friendly people who do not exploit me. I will also respect the privacy of others, not by completely destroying information received from them, but by using that information only for self-improvement (which we already do).
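This reciprocity is essentially tit-for-tat. A toy sketch, with the names and the `may_exploit` helper entirely my own invention, just to make the rule concrete:

```python
# Tit-for-tat over I/O data: reciprocate against parties observed
# exploiting our data; leave everyone else's data alone.
# All names here are illustrative, not from any real system.

def may_exploit(party: str, known_exploiters: set[str]) -> bool:
    """Use a party's I/O data only if they are known to use ours."""
    return party in known_exploiters

known_exploiters = {"Yorba"}                       # observed exploiting us
print(may_exploit("Yorba", known_exploiters))      # True: reciprocate
print(may_exploit("a friend", known_exploiters))   # False: respect privacy
```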

Is surveillance fine, then? Yes, but only distributed surveillance. I want the data to be in the hands of my neighbors and colleagues. Not a central agency.
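To make "distributed, not central" concrete, here is a minimal sketch under my own assumptions (the `Peer` type and `replicate` helper are hypothetical): each record is copied to a small circle of peers the user chooses, so no single agency ever holds the whole picture.

```python
# A record is replicated to a user-chosen circle of peers;
# there is no central collector. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    store: list[str] = field(default_factory=list)

def replicate(record: str, circle: list[Peer]) -> None:
    """Hand each neighbor its own copy of the record."""
    for peer in circle:
        peer.store.append(record)

circle = [Peer("neighbor"), Peer("colleague")]
replicate("2024-05-01: went to the market", circle)
print([(p.name, p.store) for p in circle])
```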

It is time to redesign our ethics.

Here I use “Yorba” only as a stand-in for an institution yet to be trusted.