Artificial Consciousness and Security

05/11/2019
by Andrew Powell, et al.

This paper describes a possible way to improve computer security by implementing a program with three features related to a weak notion of artificial consciousness: (partial) self-monitoring, the ability to compute the truth of quantifier-free propositions, and the ability to communicate with the user. The integrity of the program could be enhanced by a trusted computing approach, that is to say a hardware module that is at the root of a chain of trust. This paper outlines a possible approach but does not describe an implementation, which would need further work; the author believes, however, that an implementation is currently possible using existing processors, a debugger, a monitoring program, and a trusted platform module.
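To make the three features concrete, below is a minimal Python sketch, not drawn from the paper (which deliberately stops short of an implementation): a process samples a few facts about its own state (partial self-monitoring), evaluates a quantifier-free proposition, i.e. a Boolean combination of atomic comparisons with no quantifiers, over that state, and reports the verdict to the user. All names here (Monitor, snapshot, holds, report) are hypothetical illustrations, and the trusted-hardware root of trust is out of scope for the sketch.

    # Hypothetical sketch of the three features named in the abstract.
    # Requires a Unix-like OS (the resource module is not on Windows).
    import os
    import resource

    class Monitor:
        """A partially self-monitoring process."""

        def snapshot(self) -> dict:
            # Self-monitoring: observe a few facts about this process.
            usage = resource.getrusage(resource.RUSAGE_SELF)
            return {
                "pid": os.getpid(),
                "cpu_seconds": usage.ru_utime + usage.ru_stime,
                # Note: kilobytes on Linux, bytes on macOS.
                "max_rss": usage.ru_maxrss,
            }

        def holds(self, state: dict, prop) -> bool:
            # A quantifier-free proposition is a Boolean combination of
            # atomic comparisons; represent it as a nested tuple and
            # evaluate it recursively against the monitored state.
            op = prop[0]
            if op == "and":
                return all(self.holds(state, p) for p in prop[1:])
            if op == "or":
                return any(self.holds(state, p) for p in prop[1:])
            if op == "not":
                return not self.holds(state, prop[1])
            # Atomic comparison: (operator, state key, constant).
            key, value = prop[1], prop[2]
            if op == "<":
                return state[key] < value
            if op == "==":
                return state[key] == value
            raise ValueError(f"unknown operator: {op}")

        def report(self, prop) -> None:
            # Communicate the monitored state and verdict to the user.
            state = self.snapshot()
            print(f"state = {state}")
            print(f"proposition holds: {self.holds(state, prop)}")

    if __name__ == "__main__":
        # Example: "resident memory is below 1 GB (Linux units) and
        # CPU time consumed so far is below 60 seconds".
        Monitor().report(("and",
                          ("<", "max_rss", 1_000_000),
                          ("<", "cpu_seconds", 60.0)))

Representing propositions as plain data keeps the evaluator small, total, and easy to audit, which would matter if the monitor itself were to sit inside a chain of trust anchored in hardware.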
