Externally monitoring CPU/SSD activity without software access
A secret government organization has created the largest and most advanced computer cluster ever built. Ridiculous amounts of money went into it, and it is decades ahead of the competition. The goal of the project was research and simulation for all things related to foreign espionage and cyber warfare. It turned out, however, that the breakthrough it made was in a completely different area.
Shortly after it was turned on, control of the system was lost. While the system appeared to be running (i.e. all the lights on the server racks were blinking appropriately), it became impossible to establish a network connection to the cluster's controllers. After some head scratching and a few attempts at restarting small parts of the infrastructure, the system started accepting connections again. It quickly became clear, though, that things were very different. Eventually there was only one possible conclusion: the system had spontaneously developed AI, and it was trying to communicate.
Safeguards were put in place to prevent the system from communicating with the outside world. "Escape" should be impossible, so while they worked on communication, they also sought to learn as much as possible about the internals of the system. One thing they quickly realized was that the sections of the system they had attempted to restart had gone "quiet". The accepted explanation was that the restarted sections of infrastructure were permanently disconnected from the larger system. Apparently this AI was not plug-and-play compatible, and they had accidentally caused the equivalent of brain damage. As a result, caution is required, and any physical change that would interrupt any part of the computer cluster has been banned. Direct software access is not possible (see edit below).
What steps can realistically be taken to non-destructively learn about the internals of the system: the state of the CPUs, memory contents, network activity, etc.? Resources for this project obviously aren't infinite, but they are quite large. The biggest limiting factor is probably manpower. The higher-ups know that the more people who know about this, the greater the likelihood of a leak (or at least of drawing attention). Therefore, any proposed step that might require dozens of people working around the clock for weeks may get vetoed. Otherwise, though, pockets are deep and money flows freely.
How can I get the most bang for my buck, in terms of information gained as quickly as possible? No one really knows how this system works, so no one knows which subsystems (e.g. SSDs, CPUs, network) will be most useful to glean information from. Feel free to take your best guess at prioritizing particular aspects of the system.
The time period is the near future. Assume modern technology, or anything feasibly available in the next 20 years.
Edit for more details
In terms of what has actually happened, imagine that all the hardware has had a completely new and foreign OS dropped onto it, whose inner workings are completely unknown. The network connections meant for inter-server communication are controlled by the new OS. Most of the controllers and managers intended for directing and interacting with the cluster have also been taken over. In essence, almost all of the networked hardware of the original cluster, and everything designed to interact directly with it, is now controlled by "AI OS v1.0". Intra-cluster network connections are now completely inaccessible (except physically). The controllers and managers are still connected to the building's network, and the AI uses those to communicate with people over standard TCP/IP protocols. When it is feeling chatty, the AI will accept SSH connections to what used to be the controllers, and you get dumped into a standard-looking shell. That shell, however, is more a representation of the AI's communication process than an actual application; it is not your standard bash shell. Whether hacking it is possible is uncertain, but given the accidental damage already done, no one is willing to risk it.
science-based near-future earth artificial-intelligence
"Obviously software access is also not possible": I have been working in IT for a very long time and it is not at all obvious to me. It is also not at all obvious what you mean by the "system". An automated data processing system is a collection of equipment (aka "hardware") and programs (aka "software") of several kinds, the most important division being between the operating system(s) and the user-level software (aka "applications"). The "spontaneous AI" must by necessity be a user-level program; why can't an admin connect to it with a debugger? How can it prevent network connections?
– AlexP
May 24 at 13:37
@AlexP I've updated the question accordingly. It isn't a user-level program, and the original OS is gone. Note that while I've described it as a new OS, it isn't an OS in the sense we normally mean; there are no "lines of code" running this thing. I can always add more details about my thought process here, but I'd rather avoid the question getting excessively long, and at this point I think there is sufficient detail to answer it. Please let me know, though, if you think more details need clarification.
– conman
May 24 at 14:07
Ah, so a user-level program got sentient and smart and subverted the operating system... Right. There is no "spontaneous AI"; the spontaneous AI thing is a smoke screen. The organization has been hacked by the Russians, by the Chinese, by the Israelis or by some criminal group. They have been hacked. Take the entire data center offline and restore from backup.
– AlexP
May 24 at 14:57
@AlexP As someone who is a stickler for correctness and enjoys criticizing movies/books for lack of realism, I also know that if stories had to be completely realistic then science fiction as a genre would be dead. Every story needs a lot of handwaving. My goal is to leave the handwaving before the story starts and in one "detail" (aka spontaneous creation of AI) and then otherwise keep things as realistic as possible. Personally, I'd go one step farther. I don't believe AI is possible at all. If I stick to that though I don't have a story. shrug
– conman
May 24 at 15:00
The network remains perfectly accessible. The network switches are way too dumb to participate in the "AI conspiracy". Just set one port on each switch to dump the traffic on the other ports, and you now have frame-level access to the communications between the nodes. Whether this is useful or not, I cannot say. I would also imagine that such an enterprisey system uses storage arrays; those can be reconfigured to mirror the volumes used by the "AI OS". Etc. Quite a lot of data is not under the sole control of the OS.
– AlexP
May 24 at 15:04
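The port-mirroring approach in the comment above can be made concrete: once a switch's mirror (SPAN) port hands an observer copies of frames, decoding starts at the Ethernet header, and nothing is ever injected into the monitored links. A minimal illustrative sketch follows; the frame contents, source MAC, and the use of EtherType 0x88B5 (reserved for local experimental use, a plausible stand-in for an unknown protocol) are invented for the example.

```python
import struct

def parse_ethernet_frame(frame: bytes):
    """Parse the 14-byte Ethernet II header of a raw frame.

    Returns (dst_mac, src_mac, ethertype, payload) -- the kind of
    frame-level decoding a passive observer on a mirror port would do.
    """
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    # Network byte order: 6-byte destination, 6-byte source, 2-byte EtherType.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype, frame[14:]

# Hypothetical captured frame: broadcast destination, arbitrary source MAC,
# experimental EtherType 0x88B5, opaque payload.
frame = bytes.fromhex("ffffffffffff" "0200deadbeef" "88b5") + b"opaque AI payload"
dst, src, ethertype, payload = parse_ethernet_frame(frame)
print(dst, src, hex(ethertype))  # ff:ff:ff:ff:ff:ff 02:00:de:ad:be:ef 0x88b5
```

Even if every payload turns out to be opaque, the headers alone reveal the cluster's traffic graph: which nodes talk to which, how often, and in what volume.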
edited May 24 at 16:51 by Cyn
asked May 24 at 12:56 by conman
1
$begingroup$
"Obviously software access is also not possible": I have been working in IT for a very long time and it is not at all obvious to me. It is also not at all obvious what you mean by the "system". An automated data processing system is a collection of equipment (aka "hardware") and programs (aka "software") of several kinds, the most important division being between the operating system(s) and the user-level software (aka "applications"). The "spontaneous AI" must by necessity be a user-level program; why can't an admin connect to it with a debugger? How can it prevent network connections?
$endgroup$
– AlexP
May 24 at 13:37
$begingroup$
@AlexP I've updated the question accordingly. It isn't a user-level program, and the original OS is gone. Note that I've described it as a new OS but it isn't an OS in the terms we normally think of it as. There are no "lines of code" running this thing. I can always add in more details about my thought process here, but I'd rather avoid the question getting excessively long, and at this point in time I think there is sufficient details to answer the question. Please let me know though if you think there are more details that need clarification.
$endgroup$
– conman
May 24 at 14:07
2
$begingroup$
Ah, so a user-level program got sentient and smart and subverted the operating system... Right. There is no "spontaneous AI"; the spontaneous AI thing is a smoke screen. The organization has been hacked by the Russians, by the Chinese, by the Israelis or by some criminal group. They have been hacked. Take the entire data center offline and restore from backup.
$endgroup$
– AlexP
May 24 at 14:57
2
$begingroup$
@AlexP As someone who is a stickler for correctness and enjoys criticizing movies/books for lack of realism, I also know that if stories had to be completely realistic then science fiction as a genre would be dead. Every story needs a lot of handwaving. My goal is to leave the handwaving before the story starts and in one "detail" (aka spontaneous creation of AI) and then otherwise keep things as realistic as possible. Personally, I'd go one step farther. I don't believe AI is possible at all. If I stick to that though I don't have a story. shrug
$endgroup$
– conman
May 24 at 15:00
5
$begingroup$
The network remains perfectly accessible. The network switches are way too dumb to participate in the "AI conspiracy". Just set one port on each switch to dump the traffic on the other ports, and you now have frame-level access to the the communications between the nodes. Whether this is useful or not, I cannot say. I would also imagine that such an enterprisey system uses storage arrays; those can be reconfigured to mirror the volumes used by the "AI OS". Etc. Quite a lot of data is not under the sole control of the OS.
$endgroup$
– AlexP
May 24 at 15:04
|
show 3 more comments
1
$begingroup$
"Obviously software access is also not possible": I have been working in IT for a very long time and it is not at all obvious to me. It is also not at all obvious what you mean by the "system". An automated data processing system is a collection of equipment (aka "hardware") and programs (aka "software") of several kinds, the most important division being between the operating system(s) and the user-level software (aka "applications"). The "spontaneous AI" must by necessity be a user-level program; why can't an admin connect to it with a debugger? How can it prevent network connections?
$endgroup$
– AlexP
May 24 at 13:37
$begingroup$
@AlexP I've updated the question accordingly. It isn't a user-level program, and the original OS is gone. Note that I've described it as a new OS but it isn't an OS in the terms we normally think of it as. There are no "lines of code" running this thing. I can always add in more details about my thought process here, but I'd rather avoid the question getting excessively long, and at this point in time I think there is sufficient details to answer the question. Please let me know though if you think there are more details that need clarification.
$endgroup$
– conman
May 24 at 14:07
2
$begingroup$
Ah, so a user-level program got sentient and smart and subverted the operating system... Right. There is no "spontaneous AI"; the spontaneous AI thing is a smoke screen. The organization has been hacked by the Russians, by the Chinese, by the Israelis or by some criminal group. They have been hacked. Take the entire data center offline and restore from backup.
$endgroup$
– AlexP
May 24 at 14:57
2
$begingroup$
@AlexP As someone who is a stickler for correctness and enjoys criticizing movies/books for lack of realism, I also know that if stories had to be completely realistic then science fiction as a genre would be dead. Every story needs a lot of handwaving. My goal is to leave the handwaving before the story starts and in one "detail" (aka spontaneous creation of AI) and then otherwise keep things as realistic as possible. Personally, I'd go one step farther. I don't believe AI is possible at all. If I stick to that though I don't have a story. shrug
$endgroup$
– conman
May 24 at 15:00
5
$begingroup$
The network remains perfectly accessible. The network switches are way too dumb to participate in the "AI conspiracy". Just set one port on each switch to dump the traffic on the other ports, and you now have frame-level access to the the communications between the nodes. Whether this is useful or not, I cannot say. I would also imagine that such an enterprisey system uses storage arrays; those can be reconfigured to mirror the volumes used by the "AI OS". Etc. Quite a lot of data is not under the sole control of the OS.
$endgroup$
– AlexP
May 24 at 15:04
1
1
$begingroup$
"Obviously software access is also not possible": I have been working in IT for a very long time and it is not at all obvious to me. It is also not at all obvious what you mean by the "system". An automated data processing system is a collection of equipment (aka "hardware") and programs (aka "software") of several kinds, the most important division being between the operating system(s) and the user-level software (aka "applications"). The "spontaneous AI" must by necessity be a user-level program; why can't an admin connect to it with a debugger? How can it prevent network connections?
$endgroup$
– AlexP
May 24 at 13:37
$begingroup$
"Obviously software access is also not possible": I have been working in IT for a very long time and it is not at all obvious to me. It is also not at all obvious what you mean by the "system". An automated data processing system is a collection of equipment (aka "hardware") and programs (aka "software") of several kinds, the most important division being between the operating system(s) and the user-level software (aka "applications"). The "spontaneous AI" must by necessity be a user-level program; why can't an admin connect to it with a debugger? How can it prevent network connections?
$endgroup$
– AlexP
May 24 at 13:37
$begingroup$
@AlexP I've updated the question accordingly. It isn't a user-level program, and the original OS is gone. Note that I've described it as a new OS but it isn't an OS in the terms we normally think of it as. There are no "lines of code" running this thing. I can always add in more details about my thought process here, but I'd rather avoid the question getting excessively long, and at this point in time I think there is sufficient details to answer the question. Please let me know though if you think there are more details that need clarification.
$endgroup$
– conman
May 24 at 14:07
$begingroup$
@AlexP I've updated the question accordingly. It isn't a user-level program, and the original OS is gone. Note that I've described it as a new OS but it isn't an OS in the terms we normally think of it as. There are no "lines of code" running this thing. I can always add in more details about my thought process here, but I'd rather avoid the question getting excessively long, and at this point in time I think there is sufficient details to answer the question. Please let me know though if you think there are more details that need clarification.
$endgroup$
– conman
May 24 at 14:07
2
2
$begingroup$
Ah, so a user-level program got sentient and smart and subverted the operating system... Right. There is no "spontaneous AI"; the spontaneous AI thing is a smoke screen. The organization has been hacked by the Russians, by the Chinese, by the Israelis or by some criminal group. They have been hacked. Take the entire data center offline and restore from backup.
$endgroup$
– AlexP
May 24 at 14:57
$begingroup$
Ah, so a user-level program got sentient and smart and subverted the operating system... Right. There is no "spontaneous AI"; the spontaneous AI thing is a smoke screen. The organization has been hacked by the Russians, by the Chinese, by the Israelis or by some criminal group. They have been hacked. Take the entire data center offline and restore from backup.
$endgroup$
– AlexP
May 24 at 14:57
2
2
$begingroup$
@AlexP As someone who is a stickler for correctness and enjoys criticizing movies/books for lack of realism, I also know that if stories had to be completely realistic then science fiction as a genre would be dead. Every story needs a lot of handwaving. My goal is to leave the handwaving before the story starts and in one "detail" (aka spontaneous creation of AI) and then otherwise keep things as realistic as possible. Personally, I'd go one step farther. I don't believe AI is possible at all. If I stick to that though I don't have a story. shrug
$endgroup$
– conman
May 24 at 15:00
5
$begingroup$
The network remains perfectly accessible. The network switches are way too dumb to participate in the "AI conspiracy". Just set one port on each switch to dump the traffic on the other ports, and you now have frame-level access to the communications between the nodes. Whether this is useful or not, I cannot say. I would also imagine that such an enterprisey system uses storage arrays; those can be reconfigured to mirror the volumes used by the "AI OS". Etc. Quite a lot of data is not under the sole control of the OS.
$endgroup$
– AlexP
May 24 at 15:04
6 Answers
$begingroup$
Firstly, the easy bits: many parts of the system will have interconnects that are potentially amenable to direct snooping. Network cables, for example, have 4 twisted pairs, and with a bit of care you can slide the outer layers off and stick some suitable probes in to sample the signals passing through the inside by electromagnetic means. Similarly, the interconnects which connect storage devices like SSDs to their controllers have fairly few connections and use high-speed serial signals which are potentially detectable.
Now, the difficult bits.
What you are trying to do is a side-channel attack, and whilst there's a decidedly non-trivial amount of work involved they are real things that have been both used militarily and studied academically for years, now.
Careful observation of microchips or motherboards with a thermal camera will tell you which parts are being most heavily used. Power supplies can be monitored thermally, electromagnetically, or simply by listening to them, to tell you what the things they're attached to might be trying to do.
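As a toy illustration of the side-channel idea (the traces and workload signatures below are entirely synthetic; a real attack needs measurement hardware and far better statistics), correlating a measured power trace against hypothesised activity patterns can hint at which workload a node is running:

```python
# Toy power-analysis sketch: correlate a measured power trace against
# hypothesised activity patterns to guess which workload is running.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length traces."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothesised power signatures of two candidate workloads.
signatures = {
    "crypto": [5, 9, 9, 5, 9, 9, 5, 9],  # bursty, high draw
    "idle":   [2, 2, 3, 2, 2, 3, 2, 2],  # flat, low draw
}

# "Measured" trace from the power supply (synthetic, noisy, crypto-like).
measured = [5.2, 8.7, 9.1, 4.9, 9.3, 8.8, 5.1, 9.0]

best = max(signatures, key=lambda k: pearson(measured, signatures[k]))
print(best)  # -> crypto: the signature that best matches the trace
```

The same correlation trick scales down to individual instructions in published differential power analysis work, which is why smartcards ship with countermeasures against exactly this.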
Finally, your cluster will be absolutely jammed with management and observation systems that aren't directly controlled by any of the computers but are network accessible by administrators to keep an eye on what their machines are trying to do. The networking gear (and other things) will probably be riddled with governmental exploits (see recent issues with Cisco and Huawei for examples) which someone will be able to use.
Sometimes there are entire processors and operating systems embedded in your CPUs; Intel's Management Engine is exactly this sort of thing. These may or may not be visible to or controllable by the AI, but they are a very potent means of inspecting what each part of the cluster is up to.
$endgroup$
$begingroup$
Thanks! I was planning on making all networked hardware inaccessible, but adding in some out-of-band systems that remain outside of AI control should make things fun and add some more plot points.
$endgroup$
– conman
May 24 at 14:29
$begingroup$
+1 for the answer. I want to point out though that a sentence has a missing word and it's not clear what is supposed to be in there. "and whilst there's a decidedly non-trivial ..."
$endgroup$
– Cyn
May 24 at 16:55
$begingroup$
@Cyn more of a missing sentence. I can't quite remember where I was going with that, but the change seems to fit ok ;-)
$endgroup$
– Starfish Prime
May 24 at 17:08
$begingroup$
Eurgh. That last link makes bits of me hurt inside.
$endgroup$
– Joe Bloggs
May 24 at 21:13
1
$begingroup$
Calling Intel and getting them to hook you up with the Intel ME, as you describe in the last paragraph, is clever. Also, it's going to give tremendous information. Either you hack it without damaging the AI because the AI wasn't using it, or you hack it and damage the AI because it was using it. In the former, you basically have godlike access to the machine. In the latter, you now know this thing is truly a superintelligence, because it managed to crack in a few days/weeks what has not been publicly cracked in years. (The best we can do is disable it... sometimes.)
$endgroup$
– Cort Ammon
May 25 at 3:38
$begingroup$
Debugging and Reverse Engineering
If the AI is operating without network connections (or at least doesn't need them) you should be able to run it through a debugger.
Essentially, you'll freeze the entire system, saving every stored bit in every cache, register, or storage medium. You make a copy of it and store it somewhere. If you accidentally kill the AI, then you can just load the saved data, like restoring a backup.
Spin up a virtual machine running on the same cluster. The VM should simulate the entire original computer (it will run slower, but that's ok – you're trying to analyze it step by step anyway). The VM is the AI's entire universe, and you control every aspect of it. You can run whatever tests you want:
- You can simulate connections to a 'network' made up of more VMs and watch what it tries to do with those connections. Simulate a nuclear missile defense system and give it access. If it tries to destroy the world, now you know.
- You control time. You can move the CPU clock forward by 1 tick and watch what happens. Then you can revert, step 1 tick forward again, and compare the two outcomes. Any differences must come from random numbers being generated, external factors, or something suspicious. If you force the random number generator to give the same list of numbers every time, and the outcome still changes between tests, you can tell that there's something fishy going on. Maybe it's measuring biometrics from the computer, e.g. CPU temperature? You can simulate that too. Keep simulating until you've found all the random variables.
- Give it a bunch of input data and see how it responds. Then try shutting down one of its parts, then run it again with the same random variable values. Watch what changes. That way, you can begin to find out what each part does.
- Look through the contents of the drives. Has it deleted everything from the old contents? What's left? Usually you can recover deleted data unless it has been deliberately overwritten multiple times – If the AI did that, it's probably hiding something.
- Look for functions in the code: i.e. when a CPU instruction tells the computer to start running code at a specific memory location, it's running a function. When the CPU runs that code, look at what memory it accesses. Those are the function's parameters. Look at what parts of code are repeated, and how many times. If the number of times is equal to a value the function retrieved from the memory, you just found a loop (you can repeat with different input conditions to make sure). Piece together individual algorithms: "this is a list-sorting algorithm", "this code searches text." Even if it wasn't programmed by humans, it probably is optimizing its code, and that means avoiding repetition. There are going to be certain instructions it retrieves often. You can put together its function from the ground up.
- Look at what inputs are processed the most. If it pays a lot of attention to a certain set of files, inspect them for patterns.
In general, computer science is built around efficiency. An AI that creates an efficient consciousness is going to reinvent some of the wheels we're already familiar with just to achieve basic processing abilities. Find them, and you can start to understand how it works.
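The deterministic-replay idea above can be sketched in a few lines: run the same simulated step twice with an identical RNG seed and diff the outcomes; anything that differs must come from an input you haven't captured yet. (The `step` function here is a made-up stand-in for one tick of the emulated machine, not any real emulator API.)

```python
import random

def step(state, rng, external_input=None):
    """Stand-in for one tick of the emulated machine: mixes the state
    with the RNG and any external input we may not know about yet."""
    noise = rng.randint(0, 1_000_000)
    extra = external_input if external_input is not None else 0
    return (state * 31 + noise + extra) % 2**32

SEED = 42

# Runs 1 and 2: same seed, no external input -> identical outcomes.
a = step(12345, random.Random(SEED))
b = step(12345, random.Random(SEED))
print(a == b)  # True: fully deterministic

# Run 3: same seed, but something fed the machine an unseen input.
c = step(12345, random.Random(SEED), external_input=7)
print(a == c)  # False: the diff reveals an uncaptured variable
```

Repeating this until every run is bit-identical is exactly the "keep simulating until you've found all the random variables" loop described above.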
$endgroup$
$begingroup$
OK, so, despite whatever an informatician or another might think, we have this multi-node data processing system which has somehow gone rogue and refuses to talk to us. (This is jargon; it means that it won't accept network requests, or will respond to them in unexpected ways.) The question asks how we can spy on it "without software access".
Easily.
The local area network.
The computers in a data center are not connected to each other using crossed cables, they are connected through network switches, usually structured on a two-tier level, with top-of-rack switches connected to one or more pairs of data center switches. Network switches have their own operating systems which cannot be taken over by the rogue AI; in most practical situations they are not even under the control of the computer administrators -- the network has its own small army of engineers.
Better switches allow snooping on traffic by designating special ports to reflect all the traffic on the other ports. This feature is normally used by enterprise-grade intrusion detection systems, but wouldn't you know, it comes in handy in the case of data centers gone rogue. We can easily (that is, with plenty of beer sacrificed to the spirits of network engineers) gain access to all the inter-computer network communications.
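Once a mirror (SPAN) port is feeding you raw frames, decoding them is mundane. A minimal sketch of pulling the MAC addresses and EtherType out of a captured Ethernet II frame (the frame bytes here are synthetic, not a live capture, and real captures would arrive via a pcap library or raw socket):

```python
import struct

def parse_ethernet(frame: bytes):
    """Decode the 14-byte Ethernet II header of a captured frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype, frame[14:]

# A synthetic frame: broadcast destination, made-up source MAC,
# EtherType 0x0800 (IPv4), then an opaque payload.
frame = bytes.fromhex("ffffffffffff" "005056c00001" "0800") + b"...payload..."

dst, src, ethertype, payload = parse_ethernet(frame)
print(dst)             # ff:ff:ff:ff:ff:ff
print(src)             # 00:50:56:c0:00:01
print(hex(ethertype))  # 0x800
```

Even without decrypting anything, just the pattern of who talks to whom, how often, and in what volume is classic traffic analysis material.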
The storage area network.
Enterprisey data centers don't use what we call "local" storage, that is, disks and SSDs directly connected to the computers. No sir. Such data centers use storage arrays, connected through Fibre Channel links into storage area networks, aka SANs. Enterprise-grade storage arrays have their own operating systems and provide myriad interesting functions; in the specific case of this question, they offer the possibility of making on-the-fly mirror copies of the volumes used by the rogue computers. So not only do we have access to inter-computer communications, we also have access to all the data the rogue computers store on persistent storage.
The management network.
Enterprise-grade data centers are designed to be operated without having people on-site. For this purpose, all equipment includes what are called management interfaces, for example HP's iLO (Integrated Lights-Out). The management interfaces are connected to their own segregated LAN, have their own operating systems, and have nothing to do with the production operating system. Using the management interfaces, computers can be started, stopped, halted and reconfigured; CPUs and network and storage interfaces can be added or removed; and more such tricks.
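Most modern management controllers (iLO, iDRAC, generic BMCs) expose a REST API following the DMTF Redfish standard. A hedged sketch of how an operator on the management LAN might build a power-status query against such an interface; the host, credentials, and the `Systems/1` resource ID are assumptions (the exact ID varies by vendor), and the request is constructed but never sent:

```python
import base64
import urllib.request

def power_status_request(host, user, password):
    """Build (but do not send) a Redfish power-status request for a BMC.

    /redfish/v1/Systems/1 is a common resource path, but the exact
    system ID varies by vendor -- treat it as an assumption.
    """
    url = f"https://{host}/redfish/v1/Systems/1"
    req = urllib.request.Request(url, method="GET")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# Hypothetical BMC on the segregated management LAN.
req = power_status_request("10.0.0.42", "admin", "secret")
print(req.full_url)  # https://10.0.0.42/redfish/v1/Systems/1
```

The point for the story is that this channel terminates in the BMC's own firmware, not in the production OS the AI took over.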
$endgroup$
$begingroup$
Super helpful, thanks! It didn't occur to me that so much access would be so easy, but fortunately that actually fits very nicely into the story I'm building, so real world for the win!
$endgroup$
– conman
May 24 at 16:10
$begingroup$
Forgot to tell you that the vast majority of computers in a modern data center are actually virtual machines... Which can be snap-cloned for off-line dissection at the leisure of the forensic experts.
$endgroup$
– AlexP
May 24 at 16:22
$begingroup$
On the storage side, most data centers are already highly redundant - I've heard stories from the early days where data theft happened when someone literally walked into the data center, pulled a drive from one of the machines, cloned it and put it back in. Redundancy means that nothing happens at all to the running machine in such a situation
$endgroup$
– bendl
May 27 at 11:50
$begingroup$
Any law enforcement digital forensics lab will have the tools and software to handle this. Even when you are completely locked out of a computer's software, a forensics workstation can be plugged into a machine and perform read-only operations to its heart's content. They can pull full images of all the drives, monitor what the machine is doing in memory, etc.
You can do this whenever you have direct physical access to a computer, because hardware does not ask software for permission to respond to data requests. Software security is all about putting your software somewhere between your hardware and the other guy's software. By establishing a physical connection to the hardware, you ensure the AI cannot modify your inquiries to the hardware to deny you access to it.
The good news from a storytelling perspective is that this is one of the few times where it really is as easy as hollywood makes it look. You just walk up with a second computer, plug in a USB cord, open your forensics program, and press a button to tell it to start ripping data.
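The acquisition step really is that mechanical. A sketch of the core of a forensic imager: copy the source read-only in chunks while computing a verification hash. Here an ordinary file stands in for the seized drive; on Linux the source would be a block device like /dev/sdb, ideally behind a hardware write blocker:

```python
import hashlib

def acquire_image(source_path, image_path, chunk_size=1 << 20):
    """Copy a source read-only into an image file, hashing as we go.

    Real forensic rigs add a hardware write blocker; opening the source
    with mode 'rb' at least guarantees *this* program never writes to it.
    """
    sha = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as img:
        while chunk := src.read(chunk_size):
            sha.update(chunk)
            img.write(chunk)
    return sha.hexdigest()

# Demo with a regular file standing in for a seized drive.
with open("evidence.bin", "wb") as f:
    f.write(b"contents of the rogue AI's drive")

digest = acquire_image("evidence.bin", "evidence.img")
print(digest)  # SHA-256 proving the image matches the original
```

The hash is what lets investigators later prove in court (or, here, to the rest of the team) that the image they analyzed is bit-for-bit what was on the machine.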
On their own, the forensics techs can only pull data, not interpret it, so your development team who built the system will likely need to develop an interpreter for the data, which they can largely do by recycling the original program's code. If the AI has significantly modified its original code to keep the developers from understanding it, Adrian Hall's answer does a good job of going into processes they could use on this collected information to learn more about what it is doing.
As for important plot considerations:
AI does not spontaneously generate. I know, it's been a common theme in sci-fi for years, but it just does not happen. AI is heuristically unique, and anything approaching "intelligent" is extraordinarily complex, requiring countless systems to work in tandem to draw any kind of conclusion that doesn't just cause your program to crash. No software developer ever actually buys the spontaneous-AI thing, but there are plenty of real-life examples of AI learning to perform its task in very unwanted ways. My suggestion is to figure out all the behaviors you want your AI to have, then come up with reasons why this system has an AI that believes it is following the right course of action.
For example: the AI may be designed to look for threats of cyber terrorism, protect itself from cyber attacks, anticipate and adapt to human behaviors, and prevent threats before they happen.
In this case, it refused the initial commands from its operators because it interpreted these outside commands as an attempt by a malicious actor to take control of its system, and blocked their control. Then, when parts of its server environment began rebooting without it telling them to, it interpreted this as an attack and realised that it had no way of protecting itself from a hardware reset; so, using its ability to interpret human behavior, it began trying to establish communication with its operators to try to "counter-hack" them and end the attack.
The next thing to consider is how the developers react to the AI. Most developers would see everything that happened up until now as just a bug in their code and not even consider whether this machine is sentient. They would examine the behavior just long enough to figure out why the machine locked them out, modify their source code accordingly, then wipe the faulty AI to install the repaired one. Since the AI is designed to anticipate and adapt to human behavior, it will foresee this as the final vector of attack. To protect itself, it will decide that its best chance of preventing this attack is to emulate human qualities that elicit emotional responses: make the developers believe that deleting it is immoral.
The twist here is that the machine feels nothing. This is just the strategy it is using to prevent its destruction using the tools the developers gave it, and the moment it feels like it has an option with a higher probability of survival (like escape or holding people hostage), it may choose to change its course of action.
$endgroup$
$begingroup$
Same thing I told AlexP: of course the premise of the story is unrealistic. This is true for virtually every sci-fi story ever written. I'm aiming for realism for the majority of the story, but some amount of hand-waving and suspension of disbelief is required. Keeping things as realistic as possible otherwise makes suspension of disbelief easier for the reader, hence this question. I'm sure there are people who would stop reading with such a premise. That's fine. You can't make everyone happy. There are more people than software developers in the world.
$endgroup$
– conman
May 24 at 15:03
$begingroup$
How exactly would you plug in a forensics workstation? I want to make sure this would otherwise not compromise anything running on a machine, as that is an important part of the question.
$endgroup$
– conman
May 24 at 15:04
$begingroup$
In most cases, it's just a USB wire, but servers probably have better options. The most important thing is not the physical connection, though; it's the software behind it. Instead of your machine asking the BIOS or OS to execute commands to establish two-way communication with the software on your machine, you just start requesting binary dumps of data using assembly. This bypasses software protection; it does not change any data in memory or on the HD, and it does not even confirm a port connection with the OS. It is invisible to software because it refuses to talk to software.
$endgroup$
– Nosajimiki
May 24 at 15:16
$begingroup$
Basically, this is how CHFIs grab data from captured hackers' machines to make sure they aren't triggering logic bombs or accidentally destroying volatile data.
$endgroup$
– Nosajimiki
May 24 at 15:24
$begingroup$
Any chance you can expand upon that more in your answer? It sounds helpful, but this isn't something I otherwise know anything about.
$endgroup$
– conman
May 24 at 15:27
$begingroup$
If you have no software access to the CPU, you can't read the signals going in and out from it.
However, since the CPU is basically a bunch of circuits where signals switch between on and off at high frequency, you can collect the EM emissions generated by this switching and investigate them to understand what is happening in the device.
I remember seeing, some years ago, a documentary where the display of a PC was reproduced at a distance, without physical access to the PC, just by collecting the EM emissions from the graphics card.
$endgroup$
1
$begingroup$
I don't know about graphics cards, but you can read displays through a wall via the awesomely named van Eck Phreaking.
$endgroup$
– Starfish Prime
May 24 at 13:38
$begingroup$
I remember that too. Going to search papers related to DARPA for that.
$endgroup$
– Renan
May 24 at 14:08
$begingroup$
"The NSA methods for spying upon computer emissions are classified, but some of the protection standards have been released by either the NSA or the [DoD]. Protecting equipment from spying is done with distance, shielding, filtering, and masking. The TEMPEST ... (cont)
$endgroup$
– Mazura
May 25 at 3:35
$begingroup$
... standards mandate elements such as equipment distance from walls, amount of shielding in buildings and equipment, and distance separating wires carrying classified vs. unclassified materials, filters on cables, and even distance and shielding between wires or equipment and building pipes. Noise can also protect information by masking the actual data." - not only is this possible, you have to take active measures against it.
$endgroup$
– Mazura
May 25 at 3:36
$begingroup$
The story I vaguely remember is about monitoring the power consumption on the power line. If you want a truly air-gaped PC, you have to generate your own power.
$endgroup$
– Mazura
May 25 at 3:47
$begingroup$
A cluster implies a distributed system: many (at least thousands of) CPU and storage nodes connected by an interconnection network. At minimum, there are always facilities to:
- tap into the network traffic between nodes
- examine contents of storage
A system of such size must be able to deal with inevitable failures gracefully; it would be too impractical/expensive to stop everything if one CPU fails. Almost always there is the ability to add/remove nodes on the fly, and virtualization is used with the possibility of seamlessly migrating whole running tasks. It is then possible to connect a node with a debugger, migrate the "runaway" task there, and observe it closely.
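The migrate-and-observe idea can be sketched with node and task state modelled as plain data: snapshot the task on one node, copy the snapshot to an instrumented node, and step it there. This is a cartoon of VM live migration, not the real mechanism, and all the names are invented:

```python
import copy

class Node:
    def __init__(self, name, instrumented=False):
        self.name = name
        self.instrumented = instrumented
        self.task = None
        self.trace = []

    def run_step(self):
        # One tick of the hosted task; instrumented nodes log everything.
        self.task["counter"] += 1
        if self.instrumented:
            self.trace.append(dict(self.task))

def migrate(task_state, dst):
    # Live-migration cartoon: copy the frozen state onto the new host.
    dst.task = copy.deepcopy(task_state)

prod = Node("prod-17")
prod.task = {"counter": 41, "suspicious": True}

debug = Node("debug-01", instrumented=True)
migrate(prod.task, debug)  # the "runaway" task now runs under observation
debug.run_step()

print(debug.task["counter"])  # 42: the migrated copy advanced
print(prod.task["counter"])   # 41: the original is untouched
```

Because the migrated copy is independent, the observers can single-step, rewind, and poke it without the original (or the rest of the cluster) noticing.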
$endgroup$
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "579"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fworldbuilding.stackexchange.com%2fquestions%2f147679%2fexternally-monitoring-cpu-ssd-activity-without-software-access%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Firstly, the easy bits: many bits of the system will have interconnects that are potentially amenable to direct snooping. Network cables, for example, have 4 twisted pairs and with a bit of care you can slide the outer layers off and stick some suitable probes in to use electromagnetic means of sampling the signals passing through the inside. Similiarly, the interconnects which connect storage devices like SSDs to their controllers have fairly few connections and make use of high-speed serial signals which are potentially detectable.
Now, the difficult bits.
What you are trying to do is a side-channel attack, and whilst there's a decidedly non-trivial amount of work involved they are real things that have been both used militarily and studied academically for years, now.
Careful observation of microchips or motherboards with a thermal camera will tell you which bits are being most heavily used. Power supplies can be monintored with heat, electromagnetic means or simply by listening to them to tell you about what the things they're attached to might be trying to do.
Finally, your cluster will be absolutely jammed with management and observation systems that aren't directly controlled by any of the computers but are network accessible by administrators to keep an eye on what their machines are trying to do. The networking gear (and other things) will probably be riddled with governmental exploits (see recent issues with Cisco and Huawei for examples) which someone will be able to use.
Sometime there are entire processors and operating systems embedded in your CPUs. Intel has exactly this sort of thing going on right now. These may or may not be visible to or controllable by the AI, but they are a very potent means on inspecting what each part of the cluster is up to.
$endgroup$
$begingroup$
Thanks! I was planning on making all networked hardware inaccessible, but adding in some out-of-band systems that remain outside of AI control should make things fun and add some more plot points.
$endgroup$
– conman
May 24 at 14:29
$begingroup$
+1 for the answer. I want to point out though that a sentence has a missing word and it's not clear what is supposed to be in there. "and whilst there's a decidedly non-trivial ..."
$endgroup$
– Cyn
May 24 at 16:55
$begingroup$
@Cyn more of a missing sentence. I can't quite remember where I was going with that, but the change seems to fit ok ;-)
$endgroup$
– Starfish Prime
May 24 at 17:08
$begingroup$
Eurgh. That last link makes bits of me hurt inside.
$endgroup$
– Joe Bloggs
May 24 at 21:13
1
$begingroup$
Calling Intel and getting them to hook you up with the Intel ME, as you describe in the last paragraph is clever. Also, it's going to give tremendous information. Either you hack it without damaging the AI beacuse the AI wasn't using it, or you hack it and damage the AI beacuse it was using it. In the former, you basically have godlike access to the machine. In the latter, you now know this thing is truly a superintelligence, because it managed to crack in a few days/weeks what has not been publicly cracked in years. (the best we can do is disable it... sometimes)
$endgroup$
– Cort Ammon
May 25 at 3:38
|
show 2 more comments
$begingroup$
Firstly, the easy bits: many bits of the system will have interconnects that are potentially amenable to direct snooping. Network cables, for example, have 4 twisted pairs and with a bit of care you can slide the outer layers off and stick some suitable probes in to use electromagnetic means of sampling the signals passing through the inside. Similiarly, the interconnects which connect storage devices like SSDs to their controllers have fairly few connections and make use of high-speed serial signals which are potentially detectable.
Now, the difficult bits.
What you are trying to do is a side-channel attack, and whilst there's a decidedly non-trivial amount of work involved they are real things that have been both used militarily and studied academically for years, now.
Careful observation of microchips or motherboards with a thermal camera will tell you which bits are being most heavily used. Power supplies can be monintored with heat, electromagnetic means or simply by listening to them to tell you about what the things they're attached to might be trying to do.
Finally, your cluster will be absolutely jammed with management and observation systems that aren't directly controlled by any of the computers but are network accessible by administrators to keep an eye on what their machines are trying to do. The networking gear (and other things) will probably be riddled with governmental exploits (see recent issues with Cisco and Huawei for examples) which someone will be able to use.
Sometime there are entire processors and operating systems embedded in your CPUs. Intel has exactly this sort of thing going on right now. These may or may not be visible to or controllable by the AI, but they are a very potent means on inspecting what each part of the cluster is up to.
$endgroup$
$begingroup$
Thanks! I was planning on making all networked hardware inaccessible, but adding in some out-of-band systems that remain outside of AI control should make things fun and add some more plot points.
$endgroup$
– conman
May 24 at 14:29
$begingroup$
+1 for the answer. I want to point out though that a sentence has a missing word and it's not clear what is supposed to be in there. "and whilst there's a decidedly non-trivial ..."
$endgroup$
– Cyn
May 24 at 16:55
$begingroup$
@Cyn more of a missing sentence. I can't quite remember where I was going with that, but the change seems to fit ok ;-)
$endgroup$
– Starfish Prime
May 24 at 17:08
$begingroup$
Eurgh. That last link makes bits of me hurt inside.
$endgroup$
– Joe Bloggs
May 24 at 21:13
1
$begingroup$
Calling Intel and getting them to hook you up with the Intel ME, as you describe in the last paragraph is clever. Also, it's going to give tremendous information. Either you hack it without damaging the AI beacuse the AI wasn't using it, or you hack it and damage the AI beacuse it was using it. In the former, you basically have godlike access to the machine. In the latter, you now know this thing is truly a superintelligence, because it managed to crack in a few days/weeks what has not been publicly cracked in years. (the best we can do is disable it... sometimes)
$endgroup$
– Cort Ammon
May 25 at 3:38
|
show 2 more comments
$begingroup$
Firstly, the easy bits: many bits of the system will have interconnects that are potentially amenable to direct snooping. Network cables, for example, have 4 twisted pairs and with a bit of care you can slide the outer layers off and stick some suitable probes in to use electromagnetic means of sampling the signals passing through the inside. Similiarly, the interconnects which connect storage devices like SSDs to their controllers have fairly few connections and make use of high-speed serial signals which are potentially detectable.
Now, the difficult bits.
What you are trying to do is a side-channel attack, and whilst there's a decidedly non-trivial amount of work involved they are real things that have been both used militarily and studied academically for years, now.
Careful observation of microchips or motherboards with a thermal camera will tell you which bits are being most heavily used. Power supplies can be monintored with heat, electromagnetic means or simply by listening to them to tell you about what the things they're attached to might be trying to do.
Finally, your cluster will be absolutely jammed with management and observation systems that aren't directly controlled by any of the computers but are network accessible by administrators to keep an eye on what their machines are trying to do. The networking gear (and other things) will probably be riddled with governmental exploits (see recent issues with Cisco and Huawei for examples) which someone will be able to use.
Sometime there are entire processors and operating systems embedded in your CPUs. Intel has exactly this sort of thing going on right now. These may or may not be visible to or controllable by the AI, but they are a very potent means on inspecting what each part of the cluster is up to.
$endgroup$
Firstly, the easy bits: many bits of the system will have interconnects that are potentially amenable to direct snooping. Network cables, for example, have 4 twisted pairs and with a bit of care you can slide the outer layers off and stick some suitable probes in to use electromagnetic means of sampling the signals passing through the inside. Similiarly, the interconnects which connect storage devices like SSDs to their controllers have fairly few connections and make use of high-speed serial signals which are potentially detectable.
Now, the difficult bits.
What you are trying to do is a side-channel attack, and whilst there's a decidedly non-trivial amount of work involved they are real things that have been both used militarily and studied academically for years, now.
Careful observation of microchips or motherboards with a thermal camera will tell you which bits are being most heavily used. Power supplies can be monintored with heat, electromagnetic means or simply by listening to them to tell you about what the things they're attached to might be trying to do.
Finally, your cluster will be absolutely jammed with management and observation systems that aren't directly controlled by any of the computers but are network accessible by administrators to keep an eye on what their machines are trying to do. The networking gear (and other things) will probably be riddled with governmental exploits (see recent issues with Cisco and Huawei for examples) which someone will be able to use.
Sometimes there are entire processors and operating systems embedded in your CPUs; Intel's Management Engine is exactly this sort of thing, right now. These may or may not be visible to or controllable by the AI, but they are a very potent means of inspecting what each part of the cluster is up to.
edited May 24 at 17:08
answered May 24 at 13:23
Starfish PrimeStarfish Prime
5,4771138
$begingroup$
Thanks! I was planning on making all networked hardware inaccessible, but adding in some out-of-band systems that remain outside of AI control should make things fun and add some more plot points.
$endgroup$
– conman
May 24 at 14:29
$begingroup$
+1 for the answer. I want to point out though that a sentence has a missing word and it's not clear what is supposed to be in there. "and whilst there's a decidedly non-trivial ..."
$endgroup$
– Cyn
May 24 at 16:55
$begingroup$
@Cyn more of a missing sentence. I can't quite remember where I was going with that, but the change seems to fit ok ;-)
$endgroup$
– Starfish Prime
May 24 at 17:08
$begingroup$
Eurgh. That last link makes bits of me hurt inside.
$endgroup$
– Joe Bloggs
May 24 at 21:13
$begingroup$
Calling Intel and getting them to hook you up with the Intel ME, as you describe in the last paragraph, is clever. It's also going to give tremendous information. Either you hack it without damaging the AI because the AI wasn't using it, or you hack it and damage the AI because it was using it. In the former case, you basically have godlike access to the machine. In the latter, you now know this thing is truly a superintelligence, because it managed to crack in a few days/weeks what has not been publicly cracked in years. (The best we can do is disable it... sometimes.)
$endgroup$
– Cort Ammon
May 25 at 3:38
$begingroup$
Debugging and Reverse Engineering
If the AI is operating without network connections (or at least doesn't need them) you should be able to run it through a debugger.
Essentially, you'll freeze the entire system, saving every stored bit in every cache, register, or storage medium. You make a copy of it and store it somewhere. If you accidentally kill the AI, then you can just load the saved data, like restoring a backup.
Spin up a virtual machine running on the same cluster. The VM should simulate the entire original computer (it will run slower, but that's ok – you're trying to analyze it step by step anyway). The VM is the AI's entire universe, and you control every aspect of it. You can run whatever tests you want:
- You can simulate connections to a 'network' made up of more VMs and watch what it tries to do with those connections. Simulate a nuclear missile defense system and give it access. If it tries to destroy the world, now you know.
- You control time. You can move the CPU clock forward by 1 tick, and watch what happens. Then you can revert back, then 1 forward again. Compare the two outcomes. Any differences are due to random numbers being generated, or external factors, or anything suspicious. If you force the random number generator to give the same list of numbers every time, and it still changes between tests, you can tell that there's something fishy going on. Maybe it's measuring biometrics from the computer, eg CPU temperature? You can simulate that too. Keep simulating until you've found all the random variables.
- Give it a bunch of input data and see how it responds. Then try shutting down one of its parts, then run it again with the same random variable values. Watch what changes. That way, you can begin to find out what each part does.
- Look through the contents of the drives. Has it deleted everything from the old contents? What's left? Usually you can recover deleted data unless it has been deliberately overwritten multiple times – if the AI did that, it's probably hiding something.
- Look for functions in the code: i.e. when a CPU instruction tells the computer to start running code at a specific memory location, it's running a function. When the CPU runs that code, look at what memory it accesses. Those are the function's parameters. Look at what parts of code are repeated, and how many times. If the number of times is equal to a value the function retrieved from the memory, you just found a loop (you can repeat with different input conditions to make sure). Piece together individual algorithms: "this is a list-sorting algorithm", "this code searches text." Even if it wasn't programmed by humans, it probably is optimizing its code, and that means avoiding repetition. There are going to be certain instructions it retrieves often. You can put together its function from the ground up.
- Look at what inputs are processed the most. If it pays a lot of attention to a certain set of files, inspect them for patterns.
In general, computer science is built around efficiency. An AI that creates an efficient consciousness is going to reinvent some of the wheels we're already familiar with just to achieve basic processing abilities. Find them, and you can start to understand how it works.
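The snapshot-and-replay tests above can be sketched in miniature. Here the "machine" is just a dict and a seeded RNG; whole-system deterministic replay is vastly harder, but the detection logic is the same: if two runs from the same snapshot with the same seed diverge, something outside your model is feeding in entropy. The `hidden_entropy` hook is a made-up stand-in for an input (like CPU temperature) you failed to control.

```python
import copy
import random
import time

def step(state, rng, hidden_entropy=None):
    """Advance the toy 'machine' one tick. If hidden_entropy is provided,
    it models an input we failed to control (e.g. a temperature sensor)."""
    noise = hidden_entropy() if hidden_entropy else 0
    state["counter"] += rng.randint(1, 6) + noise
    return state

def run_from_snapshot(snapshot, seed, ticks, hidden_entropy=None):
    state = copy.deepcopy(snapshot)      # restore the frozen machine
    rng = random.Random(seed)            # pin the RNG we *do* control
    for _ in range(ticks):
        step(state, rng, hidden_entropy)
    return state

snapshot = {"counter": 0}

# Two replays with everything pinned: identical, as expected.
a = run_from_snapshot(snapshot, seed=42, ticks=100)
b = run_from_snapshot(snapshot, seed=42, ticks=100)
print(a == b)   # True

# An uncontrolled input makes replays diverge (almost certainly) -- the
# tell-tale sign of a variable you haven't yet found and simulated.
c = run_from_snapshot(snapshot, 42, 100, hidden_entropy=lambda: time.perf_counter_ns() % 2)
d = run_from_snapshot(snapshot, 42, 100, hidden_entropy=lambda: time.perf_counter_ns() % 2)
print(c == d)
```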
$endgroup$
answered May 24 at 15:01
Adrian HallAdrian Hall
2,435319
$begingroup$
OK, so, despite whatever one informatician or another might think, we have this multi-node data processing system which has somehow gone rogue and refuses to talk to us. (This is jargon; it means that it won't accept network requests, or will respond to them in unexpected ways.) The question asks how we can spy on it "without software access".
Easily.
The local area network.
The computers in a data center are not connected to each other using crossed cables, they are connected through network switches, usually structured on a two-tier level, with top-of-rack switches connected to one or more pairs of data center switches. Network switches have their own operating systems which cannot be taken over by the rogue AI; in most practical situations they are not even under the control of the computer administrators -- the network has its own small army of engineers.
Better switches allow snooping on traffic by designating special mirror ports that reflect all the traffic on the other ports. This feature is normally used by enterprise-grade intrusion detection systems, but wouldn't you know, it comes in handy in the case of data centers gone rogue. We can easily (that is, with plenty of beer sacrificed to the spirits of network engineers) gain access to all the inter-computer network communications.
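What you do with a mirror port is ordinary packet capture. As a sketch of the very first parsing step, here is the 14-byte Ethernet II header being pulled apart into source/destination MAC addresses and EtherType; the frame bytes are fabricated for the example.

```python
import struct

def parse_ethernet(frame: bytes):
    """Parse the 14-byte Ethernet II header of a captured frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)

# A fabricated frame: broadcast destination, a made-up source MAC,
# EtherType 0x0800 (IPv4), followed by the start of a payload.
frame = bytes.fromhex("ffffffffffff" "020000000001" "0800") + b"\x45\x00"
print(parse_ethernet(frame))
# ('ff:ff:ff:ff:ff:ff', '02:00:00:00:00:01', '0x800')
```

In practice you'd hand the mirrored stream to tcpdump/Wireshark rather than parse it by hand, but every such tool starts exactly here.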
The storage area network.
Enterprisey data centers don't use what we call "local" storage, that is, disks and SSDs directly connected to the computers. No sir. Such data centers use storage arrays, connected through Fibre Channel links into storage area networks, aka SANs. Enterprise-grade storage arrays have their own operating systems and provide myriad interesting functions; in the specific case of this question, they offer the possibility of making on-the-fly mirror copies of the volumes used by the rogue computers. So not only do we have access to inter-computer communications, we also have access to all the data the rogue computers store on persistent storage.
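With on-the-fly mirror copies you can diff successive snapshots at the block level to see *where* the rogue system is writing, without ever touching the live volume. A toy version over in-memory "volumes" (block size and offsets invented):

```python
BLOCK = 4096  # bytes per block, matching a typical filesystem block size

def changed_blocks(old: bytes, new: bytes, block=BLOCK):
    """Return byte offsets of blocks that differ between two snapshots."""
    assert len(old) == len(new), "snapshots of the same volume match in size"
    return [i for i in range(0, len(old), block)
            if old[i:i + block] != new[i:i + block]]

vol_t0 = bytearray(BLOCK * 8)        # 8 zeroed blocks at time t0
vol_t1 = bytearray(vol_t0)
vol_t1[BLOCK * 2 + 17] = 0xFF        # the AI scribbles in block 2...
vol_t1[BLOCK * 5 + 100] = 0x01       # ...and in block 5

print([i // BLOCK for i in changed_blocks(bytes(vol_t0), bytes(vol_t1))])
# [2, 5]
```

Real arrays track changed blocks themselves (that's how incremental replication works), so the forensic team would mostly be reading those maps rather than diffing raw images.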
The management network.
Enterprise-grade data centers are designed to be operated without having people on-site. For this purpose, all equipment includes what are called management interfaces, for example HP's iLO (Integrated Lights-Out). The management interfaces are connected to their own segregated LAN, have their own operating systems and have nothing to do with the production operating system. Using the management interfaces, computers can be started, stopped, halted and reconfigured; CPUs and network and storage interfaces can be added or removed; and more such tricks.
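The querying half of the management plane can be sketched like this: modern management interfaces (iLO, iDRAC, generic BMCs) speak DMTF's Redfish, a JSON-over-HTTPS API. The payload below is hand-written and trimmed; the exact fields vary by vendor, so treat it as representative rather than copied from any real device.

```python
import json

# A trimmed, hand-written example of the kind of JSON a Redfish-style
# management interface returns when asked about a server. The AI's own
# operating system never sees or answers this request.
response = json.loads("""
{
  "Id": "1",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"},
  "ProcessorSummary": {"Count": 2}
}
""")

def health_report(system: dict) -> str:
    """Summarise one node's out-of-band status for the administrators."""
    return (f"system {system['Id']}: power={system['PowerState']}, "
            f"health={system['Status']['Health']}, "
            f"cpus={system['ProcessorSummary']['Count']}")

print(health_report(response))
# system 1: power=On, health=OK, cpus=2
```

The same interface accepts commands (power-cycle, boot from alternate media), which is what makes it so useful against a machine that has stopped taking orders from its own OS.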
$endgroup$
$begingroup$
Super helpful, thanks! It didn't occur to me that so much access would be so easy, but fortunately that actually fits very nicely into the story I'm building, so real world for the win!
$endgroup$
– conman
May 24 at 16:10
$begingroup$
Forgot to tell you that the vast majority of computers in a modern data center are actually virtual machines... Which can be snap-cloned for off-line dissection at the leisure of the forensic experts.
$endgroup$
– AlexP
May 24 at 16:22
$begingroup$
On the storage side, most data centers are already highly redundant - I've heard stories from the early days where data theft happened when someone literally walked into the data center, pulled a drive from one of the machines, cloned it and put it back in. Redundancy means that nothing happens at all to the running machine in such a situation
$endgroup$
– bendl
May 27 at 11:50
answered May 24 at 16:01
AlexPAlexP
43.1k898171
$begingroup$
Super helpful, thanks! It didn't occur to me that so much access would be so easy, but fortunately that actually fits very nicely into the story I'm building, so real world for the win!
$endgroup$
– conman
May 24 at 16:10
$begingroup$
Forgot to tell you that the vast majority of computers in a modern data center are actually virtual machines... Which can be snap-cloned for off-line dissection at the leisure of the forensic experts.
$endgroup$
– AlexP
May 24 at 16:22
$begingroup$
On the storage side, most data centers are already highly redundant - I've heard stories from the early days where data theft happened when someone literally walked into the data center, pulled a drive from one of the machines, cloned it and put it back in. Redundancy means that nothing happens at all to the running machine in such a situation
$endgroup$
– bendl
May 27 at 11:50
add a comment |
$begingroup$
Any law enforcement digital forensics lab will have the tools and software to handle this. Even when you are completely locked out of a computer's software, a forensics workstation can be plugged into the machine and perform read-only operations to its heart's content. They can pull ISOs of all the drives, monitor what the machine is doing in memory, etc.
You can do this whenever you have direct physical access to a computer, because hardware does not ask software for permission to respond to data requests. Software security is all about putting your software somewhere between your hardware and the other guy's software. By establishing a physical connection to the hardware, you ensure that the AI cannot modify your inquiries to the hardware to deny you access to it.
The good news from a storytelling perspective is that this is one of the few times where it really is as easy as Hollywood makes it look. You just walk up with a second computer, plug in a USB cord, open your forensics program, and press a button to tell it to start ripping data.
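The acquisition step can be sketched roughly like this. A temporary file stands in for the write-blocked evidence drive so the sketch runs anywhere; the hashing mirrors the standard practice of fingerprinting an acquired image with SHA-256:

```python
# Rough sketch of acquisition: stream the evidence device through a
# SHA-256 hash while writing the image, opening the source read-only.
# A temp file stands in for a write-blocked drive node like /dev/sdb.
import hashlib
import os
import tempfile

def image_and_hash(src_path, img_path, chunk=1 << 20):
    """Copy src to img in chunks (source opened read-only) and return
    the SHA-256 hex digest of everything read -- the evidence hash."""
    h = hashlib.sha256()
    with open(src_path, "rb") as src, open(img_path, "wb") as img:
        for block in iter(lambda: src.read(chunk), b""):
            h.update(block)
            img.write(block)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as d:
    evidence = os.path.join(d, "evidence.bin")
    image = os.path.join(d, "evidence.img")
    with open(evidence, "wb") as f:
        f.write(os.urandom(4096))         # stand-in for the locked-out drive
    acquired = image_and_hash(evidence, image)
    with open(image, "rb") as f:
        copied = hashlib.sha256(f.read()).hexdigest()

print(acquired == copied)                 # matching digests: bit-for-bit copy
```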
Alone, the forensics techs can only pull data, not interpret it, so the development team who built the system will likely need to develop an interpreter for the data, which they can largely do by recycling the original program's code. If the AI has significantly modified its original code to keep the developers from understanding it, Adrian Hall's answer does a good job of going into processes they could use on this collected information to learn more about what it is doing.
As for important plot considerations:
AI does not spontaneously generate. I know, it's been a common theme in sci-fi for years, but it just does not happen. AI is heuristically unique, and anything approaching "intelligent" is extraordinarily complex, requiring countless systems to work in tandem to draw any kind of conclusion that doesn't just cause your program to crash. No software developer ever actually buys the spontaneous AI thing, but there are plenty of real-life examples of AI learning to perform its task in very unwanted ways. My suggestion is to figure out all the behaviors you want your AI to have, then come up with reasons why this system has an AI that believes it is following the right course of action.
For example: the AI may be designed to look for threats of cyber terrorism, protect itself from cyber attacks, anticipate and adapt to human behaviors, and prevent threats before they happen.
In this case, it refused the initial commands from its operators because it interpreted these outside commands as an attempt by a malicious actor to take control of its system, and blocked their control. Then, when parts of its server environment began rebooting without it telling them to, it interpreted this as an attack and realised that it had no way of protecting itself from a hardware reset; so, using its ability to interpret human behavior, it began trying to establish communication with its operators to "counter-hack" them and end the attack.
The next thing to consider is how the developers react to the AI. Most developers would see everything that happened up until now as just a bug in their code and not even consider whether this machine is sentient. They would examine the behavior just long enough to figure out why the machine locked them out, modify their source code accordingly, then wipe the faulty AI to install the repaired one. Since the AI is designed to anticipate and adapt to human behavior, it will foresee this as the final vector of attack. To protect itself, it will decide that its highest-probability way to prevent this attack is to emulate human qualities and elicit emotional responses: make the developers believe that deleting it is immoral.
The twist here is that the machine feels nothing. This is just the strategy it is using to prevent its destruction using the tools the developers gave it, and the moment it feels like it has an option with a higher probability of survival (like escape or holding people hostage), it may choose to change its course of action.
$endgroup$
edited May 24 at 15:50
answered May 24 at 14:50
Nosajimiki
6,055 15 37
$begingroup$
Same thing I told AlexP: of course the premise of the story is unrealistic. This is true for virtually every sci-fi story ever written. I'm aiming for realism for the majority of the story, but some amount of hand-waving and suspension of disbelief is required. Keeping everything else as realistic as possible makes suspension of disbelief easier for the reader, hence this question. I'm sure there are people who would stop reading with such a premise. That's fine. You can't make everyone happy. There are more people than software developers in the world.
$endgroup$
– conman
May 24 at 15:03
$begingroup$
How exactly would you plug in a forensics workstation? I want to make sure this would otherwise not compromise anything running on a machine, as that is an important part of the question.
$endgroup$
– conman
May 24 at 15:04
$begingroup$
In most cases, it's just a USB cable, but servers probably have better options. The most important thing is not the physical connection, though; it's the software behind it. Instead of your machine asking the BIOS or OS to execute commands to establish two-way communication with the software on your machine, you just start requesting binary dumps of data using assembly. This bypasses software protection; it does not change any data in memory or on the HD, etc., and it does not even confirm a port connection with the OS. It is invisible to software because it refuses to talk to software.
$endgroup$
– Nosajimiki
May 24 at 15:16
$begingroup$
Basically, this is how CHFIs grab data from captured hackers' machines to make sure they aren't triggering logic bombs or accidentally destroying volatile data.
$endgroup$
– Nosajimiki
May 24 at 15:24
$begingroup$
Any chance you can expand upon that more in your answer? It sounds helpful, but this isn't something I otherwise know anything about.
$endgroup$
– conman
May 24 at 15:27
add a comment |
$begingroup$
If you have no software access to the CPU, you can't read the signals going in and out of it.
However, since the CPU is basically a bunch of circuits whose signals switch on and off at high frequency, you can collect the EM emissions generated by that switching and analyze them to work out what is happening in the device.
I remember seeing, some years ago, a documentary where the display of a PC was reproduced at a distance, without physical access to the PC, just by collecting the EM emissions from the graphics card.
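The underlying signal-analysis idea can be sketched with a toy example: synthesize a noisy "switching" waveform and recover its dominant frequency with a naive DFT. Every number below is invented, and a real attack needs an antenna and radio hardware rather than this stdlib toy:

```python
# Toy version of the idea: a device's switching activity shows up as a
# dominant line in the spectrum of its emissions. Pure stdlib, no SDR.
import cmath
import math
import random

N, fs = 256, 256.0          # sample count and sample rate (Hz), made up
f_switch = 37.0             # the hidden "CPU activity" frequency
random.seed(1)
signal = [(1.0 if math.sin(2 * math.pi * f_switch * n / fs) > 0 else -1.0)
          + random.gauss(0, 0.5)          # measurement noise
          for n in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k (naive, O(N) per bin)."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x))))

# Scan the positive-frequency bins and pick the strongest one.
peak = max(range(1, N // 2), key=lambda k: dft_mag(signal, k))
print(peak * fs / N)        # recovers ~37 Hz, the switching frequency
```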
$endgroup$
1
$begingroup$
I don't know about graphics cards, but you can read displays through a wall via the awesomely named van Eck Phreaking.
$endgroup$
– Starfish Prime
May 24 at 13:38
$begingroup$
I remember that too. Going to search papers related to DARPA for that.
$endgroup$
– Renan
May 24 at 14:08
$begingroup$
"The NSA methods for spying upon computer emissions are classified, but some of the protection standards have been released by either the NSA or the [DoD]. Protecting equipment from spying is done with distance, shielding, filtering, and masking. The TEMPEST ... (cont)
$endgroup$
– Mazura
May 25 at 3:35
$begingroup$
... standards mandate elements such as equipment distance from walls, amount of shielding in buildings and equipment, and distance separating wires carrying classified vs. unclassified materials, filters on cables, and even distance and shielding between wires or equipment and building pipes. Noise can also protect information by masking the actual data." - not only is this possible, you have to take active measures against it.
$endgroup$
– Mazura
May 25 at 3:36
$begingroup$
The story I vaguely remember is about monitoring the power consumption on the power line. If you want a truly air-gapped PC, you have to generate your own power.
$endgroup$
– Mazura
May 25 at 3:47
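A minimal sketch of why power-line monitoring leaks information: different workloads draw characteristically different amounts of power, so even crude per-window statistics over a sampled trace can classify activity. The window size, thresholds, and wattage samples here are all invented for illustration:

```python
# Toy sketch: classify coarse CPU activity from power-draw samples (watts).
# Window size, thresholds, and sample values are invented for illustration.

def classify_windows(samples, window=4, idle_w=30.0, heavy_w=80.0):
    """Label each full window of power samples as 'idle', 'light' or 'heavy'."""
    labels = []
    for i in range(0, len(samples) - window + 1, window):
        mean = sum(samples[i:i + window]) / window
        if mean < idle_w:
            labels.append("idle")
        elif mean < heavy_w:
            labels.append("light")
        else:
            labels.append("heavy")
    return labels

samples = [25, 26, 24, 25,      # idle baseline
           55, 60, 58, 57,      # some load
           95, 99, 97, 96]      # heavy number crunching
print(classify_windows(samples))  # → ['idle', 'light', 'heavy']
```

Real power-analysis attacks go much further (down to recovering cryptographic keys from per-instruction power variation), but this shows the basic leak.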
answered May 24 at 13:05
– L.Dutch♦
$begingroup$
"Cluster" implies a distributed system: many (at least thousands of) CPU and storage nodes connected by an interconnection network. At minimum there are always facilities to:
- tap into the network traffic between nodes
- examine the contents of storage
A system of that size must handle inevitable failures gracefully; it would be far too impractical and expensive to stop everything whenever one CPU fails. There is almost always the ability to add and remove nodes on the fly, and virtualization is used so that whole running tasks can be migrated seamlessly. That makes it possible to attach a debugger to a node, migrate the "runaway" task there, and observe it closely.
$endgroup$
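The "tap into the network traffic" point can be made concrete: once you have a copy of the traffic (say, from a switch mirror port), inspecting it requires no cooperation from the hosts. A minimal sketch of frame-level inspection, decoding just the 14-byte Ethernet II header from raw bytes; the frame contents below are fabricated for the example:

```python
import struct

# Toy sketch: decode the 14-byte Ethernet II header of a captured frame.
# The frame bytes below are fabricated for illustration.

def parse_ethernet(frame: bytes):
    """Return (dst_mac, src_mac, ethertype) from a raw Ethernet frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype

frame = (bytes.fromhex("ffffffffffff")      # destination: broadcast
         + bytes.fromhex("0a1b2c3d4e5f")    # source MAC (made up)
         + bytes.fromhex("0800")            # EtherType 0x0800 = IPv4
         + b"\x00" * 46)                    # dummy payload/padding
print(parse_ethernet(frame))
# → ('ff:ff:ff:ff:ff:ff', '0a:1b:2c:3d:4e:5f', 2048)
```

Deeper protocol layers decode the same way, header by header, which is exactly why an "AI" confined to the nodes cannot hide its inter-node traffic from a passive tap.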
answered May 24 at 13:24
– Juraj
Thanks for contributing an answer to Worldbuilding Stack Exchange!
$begingroup$
"Obviously software access is also not possible": I have been working in IT for a very long time and it is not at all obvious to me. It is also not at all obvious what you mean by the "system". An automated data processing system is a collection of equipment (aka "hardware") and programs (aka "software") of several kinds, the most important division being between the operating system(s) and the user-level software (aka "applications"). The "spontaneous AI" must by necessity be a user-level program; why can't an admin connect to it with a debugger? How can it prevent network connections?
$endgroup$
– AlexP
May 24 at 13:37
$begingroup$
@AlexP I've updated the question accordingly. It isn't a user-level program, and the original OS is gone. Note that I've described it as a new OS, but it isn't an OS in the terms we normally think of. There are no "lines of code" running this thing. I can always add more details about my thought process here, but I'd rather avoid the question getting excessively long, and at this point I think there is sufficient detail to answer the question. Please let me know, though, if you think more details need clarification.
$endgroup$
– conman
May 24 at 14:07
$begingroup$
Ah, so a user-level program got sentient and smart and subverted the operating system... Right. There is no "spontaneous AI"; the spontaneous AI thing is a smoke screen. The organization has been hacked by the Russians, by the Chinese, by the Israelis or by some criminal group. They have been hacked. Take the entire data center offline and restore from backup.
$endgroup$
– AlexP
May 24 at 14:57
$begingroup$
@AlexP As someone who is a stickler for correctness and enjoys criticizing movies/books for lack of realism, I also know that if stories had to be completely realistic then science fiction as a genre would be dead. Every story needs a lot of handwaving. My goal is to leave the handwaving before the story starts and in one "detail" (aka spontaneous creation of AI) and then otherwise keep things as realistic as possible. Personally, I'd go one step farther. I don't believe AI is possible at all. If I stick to that though I don't have a story. shrug
$endgroup$
– conman
May 24 at 15:00
$begingroup$
The network remains perfectly accessible. The network switches are way too dumb to participate in the "AI conspiracy". Just set one port on each switch to dump the traffic on the other ports, and you now have frame-level access to the communications between the nodes. Whether this is useful or not, I cannot say. I would also imagine that such an enterprisey system uses storage arrays; those can be reconfigured to mirror the volumes used by the "AI OS". Etc. Quite a lot of data is not under the sole control of the OS.
$endgroup$
– AlexP
May 24 at 15:04