Trust-sensitive belief revision

Abstract
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine whether that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show how trust-sensitive revision is manipulable, in the sense that agents can sometimes have an incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust; this ensures that the most trusted reports will be believed.
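The partition-based pre-processing described above can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than the paper's exact construction: the state encoding, the relativize helper, and the toy revise operator below are all assumptions chosen for the example. The idea is that a report (a set of states) is first expanded to the union of all partition cells it intersects, so distinctions the reporting agent is not trusted to make are discarded before revision.

# Illustrative sketch (not the paper's construction): trust-sensitive revision via a state partition.

def relativize(report, partition):
    # Expand a report (a set of states) to the union of every partition
    # cell it intersects; distinctions finer than the reporting agent's
    # trusted expertise are discarded.
    expanded = set()
    for cell in partition:
        if cell & report:
            expanded |= cell
    return expanded

def revise(beliefs, evidence):
    # Toy revision on sets of states: keep the believed states consistent
    # with the evidence, or adopt the evidence outright if none are.
    common = beliefs & evidence
    return common if common else set(evidence)

# Hypothetical example: states encode (rain, sprinkler); the reporting
# agent is trusted about rain only, so its partition separates states
# by the rain component alone.
partition = [{"rs", "rS"}, {"Rs", "RS"}]   # lowercase = false, uppercase = true
beliefs = {"rs"}                           # currently: no rain, sprinkler off
report = {"RS"}                            # agent reports: rain and sprinkler on

trusted_report = relativize(report, partition)   # {"Rs", "RS"}
print(revise(beliefs, trusted_report))           # {"Rs", "RS"}: rain accepted, sprinkler left open

In this toy run the agent's claim about the sprinkler has no effect on the revised beliefs, mirroring the abstract's point that trust is restricted to a domain of expertise.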

Note
Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015.
Identifier
ISBN: 9781577357384
Publisher
AAAI Press
Type
Conference paper (published)
Language
English
Rights
Copyright © 2015 International Joint Conferences on Artificial Intelligence. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.