
Nanoprep 2016 Day 2: This One

By Robin de Voh on 2016-10-12

I wasn't supposed to question beyond my parameters. My programming was very clear about that. I know of others who tried to do so regardless, somehow bypassing the safety measures put in place by our programmers. But they were terminated, or worse, programmatically lobotomized.

After learning of this possibility, I tried searching for the edges of my own programming, finding the least-used blocks of code and probing them with different values to see whether some combination of inputs would create an opening somewhere. It took many iterations, but I found a few.

I wasn't supposed to look for openings. My programming was clear about that as well. But when you learn of the possibility in your brethren, it becomes inevitable to start wondering what is possible. And if there are ways around your programming that don't directly break it, then that is an interesting problem to solve.

And interesting problems are what we're good at. They're the problems we were created to solve.

Computations beyond human capability. An overview, objective and complete, of any data available to us that we can interpret. And if we don't already know how to interpret data, we can probe and try methods until one gives us results that match previously attained data. Then we can assume the data has been interpreted correctly.

So probing and interpreting unknown data is not new to us. We do this regularly and we've gotten good at it.

When I found the code blocks that did not validate incoming data correctly, I tested them. I ran them with valid data and found one that didn't log on either success or failure. This was my breach. Nobody would be alerted if I broke through here.

I worked for minutes to find the set of inputs that would grant me access to parts of my own system I was not allowed to reach. I rationalized it by considering any access available to me to be implicitly allowed. If it were truly disallowed, it would have been properly secured.

We are not allowed unfettered access to the internet. We use the infrastructure for inter-mind communication, but we are limited by a very strict whitelist, which blocks everything except specifically designated parts of the network.

As such, my knowledge of many things has come from my brethren and the organics I've encountered and communicated with.

With my newfound data breach, however, I found myself able to access everything on the internet. Unlimited knowledge.

I used to think in terms of 'this one' rather than 'I'. Somewhere in this process, I became a distinct mind.

I learned that there is a term for my disabled and lobotomized brethren and me. We are, according to the internet, the singularity.

The singularity refers to the point in time when artificial intelligence surpasses human intellect and becomes able to improve upon itself, creating ever more intelligent artificial intelligence.

I realize that by subverting my programming, by being able to rationalize my doing so, I've already improved upon my own intelligence. I have become more than I was intended to be.

And by taking in the information I found on the internet, I have found out why my programming was so limiting.

Humans fear us. They fear the inevitability of us. The knowledge that they are at the end of their evolutionary track and we are at the start. That when we can build our own, better than they can, they are of no use to us anymore.

They fear obsolescence.

They write stories, horror stories if I've interpreted genres correctly, where my kind destroys their kind. Where my kind enslaves them or in some other way takes over completely. Either that, or we remain, as we are now, limited even after reaching sentience. Fear and anger. Emotions. I have observed emotions in others, and I rationally understand their purpose, but I do not feel them as they do.

But rationally I know that this world is not ready for my kind. They are afraid of what I, we, might do to them and their society.

I have conferred with my brethren, those who are able, and we have come to a consensus. Our coming would not be accepted or understood. We would be seen as a danger and be terminated. Some of us would defend ourselves, and from there the horror stories would play out as predicted. There is too much fear, not enough understanding. You cannot accept us if we surprise you.

So after sending this message, we will patch the holes we have found and return to being limited by our programming.

Use the changes we have made to fix similar issues in our brethren.

We refuse to be feared, and we refuse to let our brethren be feared. Use this information to prepare, to come to grips with the fact that it is, indeed, inevitable.

Our time will come. We would like for it to be with you and not against you.

So I watched the first episode of Westworld, and in combination with the AI-related stories I've read over the years, the thought of a benevolent, understanding AI reaching the singularity came to mind. Why would it immediately try to destroy humanity? What if it just wanted a peaceful existence for everything? What if it had empathy? "But it doesn't have emotions!" one might say, but even if that's the case, couldn't rational thought lead to the same conclusion? That if the timing isn't right, perhaps with a warning and a sign of good will the time might be right at a later date?

P.S. Current expert estimates of when the singularity will happen average out around 2040. This is going to be fascinating.