SwePub
Search the SwePub database


Hit list for the search "WFRF:(Spirakis Paul G.)"

  • Results 1-7 of 7
Numbering | Reference | Cover image | Find
1.
  • Dolev, Shlomi, et al. (author)
  • Game Authority for Robust and Scalable Distributed Selfish Computer Systems
  • 2007
  • In: Proceedings of the twenty-sixth annual ACM symposium on Principles of distributed computing. - 9781595936165 ; pp. 356-357
  • Conference paper (peer-reviewed), abstract:
    • Game theory analyzes social structures of agents that have freedom of choice within a moral code. The society allows freedom and selfishness within the moral code, which social structures enforce, i.e., legislative, executive, and judicial. Social rules encourage individual profit from which the entire society gains. Distributed computer systems can improve their scalability and robustness by using explicit social structures. We propose using a game authority middleware for enforcing the moral code on selfish agents. The power of game theory is in predicting the game outcome for specific assumptions. The prediction holds as long as the players cannot tamper with the social structure, or change the rules of the game, i.e., the prisoner cannot escape from prison in the classical prisoner's dilemma. Therefore, we cannot predict the game outcome without suitable assumptions on failures and honest selfishness.
2.
  • Dolev, Shlomi, et al. (author)
  • Game Authority for Robust Distributed Selfish-Computer Systems (Preliminary Version)
  • 2006
  • Report (other academic/artistic), abstract:
    • Game theory has an elegant way of modeling some structural aspects of social games. The predicted outcome of the social games holds as long as “the rules of the game” are kept. Therefore, a game authority (which enforces the “rules”) is implied. We present the first design for that game authority, and the first suiting middleware for executing an algorithmic mechanism in distributed systems. The middleware restricts the agents to “play by the rules”, and excludes non-selfish agents since we consider them as Byzantine. We base our design on a self-stabilizing Byzantine agreement that allows processors to audit each other's actions. We show that when the agents are restricted to act selfishly the resource allocation is asymptotically optimal (according to our novel performance ratio; multi-round anarchy cost). Our design also includes services that allow owners to share a collaborative effort for coalition optimization using group-preplay negotiation. Since there are no guarantees for successful termination of selfish negotiations, we consider “democratic” approaches for promoting “free choice”.
3.
  • Dolev, Shlomi, et al. (author)
  • Rationality authority for provable rational behavior
  • 2015
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - Cham : Springer International Publishing. - 1611-3349 .- 0302-9743. - 9783319240237 ; 9295, pp. 33-48
  • Conference paper (peer-reviewed), abstract:
    • Players in a game are assumed to be totally rational and absolutely smart. However, in reality all players may act in non-rational ways and may fail to understand and find their best actions. In particular, participants in social interactions, such as lotteries and auctions, cannot be expected to always find by themselves the “best-reply” to any situation. Indeed, agents may consult with others about the possible outcome of their actions. It is then up to the counselee to assure the rationality of the consultant's advice. We present a distributed computer system infrastructure, named rationality authority, that allows safe consultation among (possibly biased) parties. The parties' advice is adopted only after its feasibility and optimality have been verified by standard formal proof checkers. The rationality authority design considers computational constraints, as well as privacy and security issues, such as verification methods that do not reveal private preferences. Some of the techniques resemble zero-knowledge proofs. A non-cooperative game is presented by the game inventor along with its (possibly intractable) equilibrium. The game inventor advises playing by this equilibrium and offers a checkable proof for the equilibrium feasibility and optimality. Standard verification procedures, provided by verifiers trusted according to their reputation, are used to verify the proof. Thus, the proposed rationality authority infrastructure facilitates the application of game theory in several important real-life scenarios by the use of computing systems.
4.
  •  
5.
  •  
6.
  • Dolev, Shlomi, et al. (author)
  • Strategies for repeated games with subsystem takeovers implementable by deterministic and self-stabilising automata
  • 2011
  • In: International Journal of Autonomous and Adaptive Communications Systems. - 1754-8640 .- 1754-8632. ; 4:1, pp. 4-38
  • Journal article (peer-reviewed), abstract:
    • Systems of selfish-computers are subject to transient faults due to temporal malfunctions, just as society is subject to human mistakes. Game theory uses punishment to deter improper behaviour. Due to faults, selfish-computers may punish well-behaved ones. This is one of the key motivations for forgiveness that follows any effective and credible punishment. Therefore, unplanned punishments must provably cease in order to avoid infinite cycles of unsynchronised 'tit for tat' behaviour. We investigate another aspect of these systems. We consider the possibility of subsystem takeover. The takeover may lead to joint deviations coordinated by an arbitrary selfish-computer that controls an unknown group of subordinate computers. We present strategies that deter the coordinator from deviating in infinitely repeated games. We construct deterministic automata that implement these strategies with optimal complexity. Moreover, we prove that all unplanned punishments eventually cease by showing that the automata can recover from transient faults.
7.
  • Spirakis, Paul G., et al. (author)
  • Preface - LNCS Volume 10616
  • 2017
  • In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). - 1611-3349 .- 0302-9743. ; 10616 LNCS
  • Conference paper (other academic/artistic)
