- Student records (personal data, registration for examinations, grades)
- Services:
- Enrolment/expulsion of students
- Registration for examinations
- Registration of examination marks
- Information and attestations desk
- Operational Risks
- Conditio sine qua non: Provability of information properties
- Fake registration for examinations: integrity, non-repudiability ("nicht-abstreitbar")
- Leakage of grades, personal data: confidentiality, integrity
- Forgery of attestations: authenticity, integrity
Industrial Control Systems
- e.g. factories, energy and water plants (public infrastructure)
- "Chinese Hacking Team Caught Taking Over Decoy Water Plant"
- "Internet Attack Shuts Off the Heat in Finland"
- Operational risks: Integrity & Availability of public community support systems
> Self-Study Task
>
> Read about these two scenarios. Find one or more recent examples of attacks on public infrastructure, including some technical details, in the news. Keep all these scenarios in mind; we will come back to them in the next chapter:
> - [Hacker breached 63 universities and government agencies](https://www.computerworld.com/article/3170724/hacker-breached-63-universities-and-government-agencies.html)
> - [Ransomware attacks on public services](https://www.nytimes.com/2019/08/22/us/ransomware-attacks-hacking.html)
> - [Worst data leaks and breaches in the last decade](https://www.cnet.com/how-to/14-of-the-worst-data-leaks-breaches-scrapes-and-security-snafus-in-the-last-decade/)
### Message
- Goal of IT Security: **Reduction of Operational Risks of IT Systems**
- Security requirements, which define **what** security properties a system should have.
- These again are the basis of a **security policy**: defines **how** these properties are achieved
Influencing Factors
- Codes and acts (depending on applicable law)
- EU General Data Protection Regulation (GDPR)
- US Sarbanes-Oxley Act (SarbOx)
- Contracts with customers
- Certification
- For information security management systems (ISO 27001)
- Subject to German Digital Signature Act (Signaturgesetz), to Common Criteria
- Company-specific guidelines and regulations
- Access to critical data
- Permission assignment
- Company-specific infrastructure and technical requirements
- System architecture
- Application systems (such as OSs, Database Information Systems)
General Methodology: How to Come up with Security Requirements
Specialized steps in regular software requirements engineering:
1. Identify and classify *vulnerabilities*.
2. Identify and classify *threats*.
3. Match both, where relevant, to yield *risks*.
4. Analyze and decide which risks should be *dealt with*.
-> Fine-grained Security Requirements

## Vulnerability Analysis
Goal: Identification of
- technical
- organizational
- human
vulnerabilities of IT systems.
> Vulnerability
>
> Feature of hardware or software constituting, an organization running, or a human operating an IT system, which is a necessary precondition for any attack on that system with the goal of compromising one of its security properties. The set of all vulnerabilities = a system's *attack surface*.
### Human Vulnerabilities
Examples:
- Laziness
- Passwords on Post-It
- Fast-clicking exercise: Windows UAC pop-up boxes
- Social Engineering
- Pressure from your boss
- A favor for your friend
- Blackmailing: The poisoned daughter, ...
- An important-seeming email
- Lack of knowledge
- Importing and executing malware
- Indirect, hidden information flow in access control systems
> Social Engineering
>
> Influencing people into acting against their own interest or the interest of an organisation is often a simpler solution than resorting to malware or hacking.
> Both law enforcement and the financial industry indicate that social engineering continues to enable attackers who lack the technical skills, motivation to use them or the resources to purchase or hire them. Additionally, targeted social engineering allows those technically gifted to orchestrate blended attacks bypassing both human and hardware or software lines of defence. [Europol](https://www.europol.europa.eu/crime-areas-and-trends/crime-areas/cybercrime/social-engineering)
Real Cases
> Self Study Task
>
> Investigate the following real-world media (all linked from Moodle). Find any potential security vulnerabilities there and give advice on how to avoid them.
> - Watch (no listening required) the interview with staff of the French TV station TV5Monde
> - Read the Lifehacker article about Windows UAC
> - Read the Washington Times article about the Facebook scam
> - Read the two emails I received recently.
#### Indirect Information Flow in Access Control Systems
A More Detailed Scenario
- AlphaCompany has two departments: Research & Development (R&D) and Sales
- Ann is a project manager and Bob is a developer working in R&D on ProjectX; Chris is a busybody sales manager writing a marketing flyer about ProjectX
- All R&D developers communicate via an electronic bulletin board, including about any preliminary product features not yet ready for release
- Bob is responsible for informing Sales about release-ready features, using a shared web document
> Security Requirement
>
> No internal information about a project, which is not approved by the project manager, should ever go into the product flyer.
Access Control Configuration
- 3 users: ann, bob, chris
- 2 groups:
- crewx: ann, bob, ...
- sales: chris, ...
- Settings:
```
drw- --- --- 1 ann crewx 2020-04-14 15:10 ProjectXFiles
-rw- r-- --- 1 ann crewx 2020-04-14 15:10 ProjectXBoard
-rw- r-- --- 1 bob sales 2020-04-14 14:22 NotesToSales
-rw- --- --- 1 chris sales 2020-04-13 23:58 SalesFlyer.pdf
```
- -> exploitation of missing length checks on input buffers
- -> buffer overflow
What an Attacker Needs to Know
##### Necessary Knowledge and Skills
- Source code of the target program (e. g. a privileged server), obtained by disassembling
- Better: symbol table, as with an executable not stripped of debugging information
- Even better: precise knowledge about the compiler used w.r.t. runtime management
- how calling conventions affect the stack layout
- degree to which stack layout is deterministic, which eases experimentation
Sketch of the Attack Approach (Observations during program execution)
- Stack grows towards smaller addresses
- -> whenever a procedure is called, all its information is stored in a *procedure frame* = subsequent addresses below those of previously stored procedure frames
- in each procedure frame: address of the next instruction to execute after the current procedure returns (ReturnIP)
- after storing the ReturnIP, compilers reserve stack space for local variables -> these occupy lower addresses
##### Preparing the Attack
Attacker carefully prepares an input argument msg: `\0 ...\0 /bin/shell#system`
```cpp
void processSomeMsg(char *msg, int msgSize){
    char localBuffer[1024];
    int i = 0;
    // The loop bound is the attacker-controlled msgSize; the size of
    // localBuffer (1024 bytes) is never checked, so a longer msg
    // overwrites adjacent stack memory, including the saved ReturnIP.
    while (i < msgSize){
        localBuffer[i] = msg[i];
        i++;
    }
    ...
}
```
##### Result:
- Attacker made the victim program overwrite runtime-critical parts of its stack:
- by counting up to the length of msg
- at the same time writing back over previously saved runtime information -> ReturnIP
- After finishing processSomeMsg: the victim program executes code at the address in ReturnIP = address of a forged call to execute arbitrary programs!
- Additional parameter to this call: file system location of a shell
> The attacker can remotely communicate, upload, download, and execute anything - with the cooperation of the OS, since all of this runs with the original privileges of the victim program!
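For contrast, a minimal hardened variant (my own sketch, not from the original notes): bounding the copy by the buffer size keeps an over-long msg from ever reaching the saved ReturnIP:

```cpp
#include <cstddef>
#include <cstring>

void processSomeMsg(const char *msg, int msgSize){
    char localBuffer[1024];
    // Never copy more than the buffer can hold, regardless of msgSize --
    // the attacker-controlled length can no longer overwrite the frame.
    std::size_t n = msgSize < 0 ? 0 : static_cast<std::size_t>(msgSize);
    if (n > sizeof(localBuffer)) n = sizeof(localBuffer);
    std::memcpy(localBuffer, msg, n);
    // ... process localBuffer[0..n-1] ...
}
```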
> Self-Study Task
>
> Search the internet: find a (more or less) recent example of a successful buffer overflow attack. Describe as precisely as possible what happened during the attack and which programming error made it possible!
> Commonplace search engines for news articles, but also web databases of software vulnerabilities such as https://cve.mitre.org/, may help you.
- Human: Social engineering, laziness, lack of knowledge
- Organizational: Rights management, key management, room access
- Technical: Weak protection paradigms, specification and implementation errors
#### Examples
Scenario 1: Insider Attack
- Social Engineering, plus
- Exploitation of conceptual vulnerabilities (DAC), plus
- Professionally tailored malware
Scenario 2: Malware (a family heirloom ...)
- Trojan horses: Executable code with hidden functionality.
- Viruses: Code for self-modification and self-duplication, often coupled with damaging the host.
- Logical bombs: Code that is activated by some event recognizable from the host (e. g. time, date, temperature, pressure, geographic location, ...).
- Backdoors: Code that is activated through undocumented interfaces (mostly remote).
- Ransomware: Code for encrypting possibly all user data found on the host, used for blackmailing the victims (to pay for decryption).
- Worms and worm segments: Autonomous, self-duplicating programs. Originally designed for good: to make use of free computing power in local networks.
Scenario 3: Outsider Attack
- Attack Method: Buffer Overflow
- Exploitation of implementation errors
Scenario 4: High-end Malware:Root Kits
- Goal: Invisible, total, sustainable takeover of a complete IT system
- Method: Comprehensive tool kit for fully automated attacks
1. automatic analysis of technical vulnerabilities
2. automated attack execution
3. automated installation of backdoors
4. automated installation and activation of stealth mechanisms
- Target: Attacks on all levels of the software stack:
- firmware
- bootloader
- operating system (e. g. drivers, file system, network interface)
- system applications (e. g. file and process managers)
- user applications (e. g. web servers, email, office)
- tailored to specific software and software versions found there!
> Self-Study Task
>
> Read about the following malware examples, both historical and up to date. Identify each as a virus, logical bomb, backdoor, ransomware, or worm; think about which necessary vulnerabilit(y|ies) make all of these threats so dangerous.
> - One of the most sophisticated pieces of malware ever discovered: Stuxnet
> - One of the first large-scale malware threats in history: Michelangelo
> - Two cooperating pieces of malware that put at risk numerous public institutional IT systems: Emotet and Ryuk.
#### Root Kits
Step 1: Vulnerability Analysis
- Tools look for vulnerabilities in
- Active privileged services and daemons (from inside a network: nmap; from outside: by port scans) -> discovers: web server, remote access server (sshd), file server (ftpd), time server (ntpd), print server (cupsd), bluetoothd, smbd, ...
- Configuration files -> discovers: weak passwords, open ports
- Operating systems -> discovers: kernel and system tool versions with known implementation errors
- Using built-in knowledge base: an automatable vulnerability database
- Result: system-specific collection of vulnerabilities -> choice of attack method and tools to execute
Step 2: Attack Execution
- Fabrication of tailored software to exploit vulnerabilities in
- Server processes or system tool processes (daemons)
- OS kernel itself
to execute code of the attacker with root privileges
- This code
- First installs smoke bombs for obscuring the attack
- Then replaces original system software with pre-fabricated modules
- servers and demons
- utilities and libraries
- OS modules
- containing
- backdoors (-> step 3)
- smoke bombs for future attacks (-> step 4)
- Results:
- Backdoors allow for high-privilege access within fractions of seconds
- System modified with attacker’s servers, demons, utilities, OS modules
- Obfuscation of modifications and future access
Step 3: Attack Sustainability
- Backdoors for any further control & command in
- Servers (e.g. ssh daemon sshd)
- Utilities (e.g. login)
- Libraries (e.g. PAM, pluggable authentication modules)
- OS (system calls used by programs like sudo)
- Modifications of utilities and OS to prevent
- Killing root kit processes and connections (kill, signal)
- Removal of root kit files (rm, unlink)
- Results: Unnoticed access for attacker
- Anytime
- Highly privileged
- Extremely fast
- Virtually unpreventable
Step 4: Stealth Mechanisms (Smoke Bombs)
- Clean log files (entries for root kit processes, network connections), e.g. syslog, kern.log, user.log, daemon.log, auth.log, ...
- Modify system admin utilities
- Process management (hide running root kit processes), e.g. ps, top, ksysguard, taskman
- File system (hide root kit files), e.g. ls, explorer, finder
- Network (hide active root kit connections), e.g. netstat, ifconfig, ipconfig, iwconfig
- Substitute OS kernel modules and drivers (hide root kit processes, files, network connections), e.g. /proc/..., stat, fstat, pstat
- Result: processes, files and communication of the root kit become invisible
Risk and Damage Potential:
- Likeliness of success: extremely high in today's commodity OSs
- Attack methods and techniques: exploiting vulnerabilities
- human
- organizational
- technical
- -> A zoo of threats; practical assistance:
- National (Germany): BSI IT-Grundschutz standards and catalogues
- International: Common Criteria
Attacks on Public Infrastructure revisited:
> Self-Study Task
>
> Take a close look at those example scenarios for attacks on public infrastructure you read and researched about in chapter 1. For all of them, try to answer the following questions:
> - Who was the presumed attacker mentioned in the article? Classify them according to what you learned about attacker types.
> - What was the attack objective? Again, classify based on what you learned in this chapter.
> - How was the attack made possible? Identify the types of vulnerabilities exploited.
- Cloud computing:"Loss of VM integrity" -> contract penalties, loss of confidence/reputation
- Industrial plant control:"Tampering with frequency converters" -> damage or destruction of facility
- Critical public infrastructure:"Loss of availability due to DoS attacks" -> interrupted services, possible impact on public safety (cf. Finnish heating plant)
- Traffic management:"Loss of GPS data integrity" -> maximum credible accident w. r. t. safety
- Security Server: Manages and evaluates these modules
### Implementation Alternative B
Application-embedded Policy: The security policy is only known and enforced by one user program $\rightarrow$ implemented in a user-space application
Application-level Security Architecture: The security policy is known and enforced by several collaborating user programs in an application system $\rightarrow$ implemented in a local, user-space security architecture
Policy Server Embedded in Middleware: The security policy is communicated and enforced by several collaborating user programs in a distributed application system $\rightarrow$ implemented in a distributed, user-space security architecture
> Please read each of the following scenarios. Then select the statement you intuitively think is most likely:
> 1. Linda is a young student with a vision. She fights against environmental pollution and volunteers in an NGO for climate protection. After finishing her studies ...
> - ... she becomes an attorney for tax law.
> - ... she becomes an attorney for tax law, but in her spare time consults environmental activists and NGOs.
> 2. Suppose during the 2022 football world cup, Germany reaches the knockout phase after easily winning every game in the previous group phase.
> - Germany wins the semi-final.
> - Germany wins the world cup.
> Think twice about your choices. Can you imagine what other people chose, and why?
Goal of Formal Security Models
- Complete, unambiguous representation of security policies for
- Abstraction from (usually too complex) reality $\rightarrow$ get rid of insignificant details, e.g.: allows statements about computability and computational complexity
- Precision in describing what is significant $\rightarrow$ model analysis and implementation
> Security Model
>
> A security model is a precise, generally formal representation of a security policy.
Model Spectrum
- Models for access control policies:
- identity-based access control (IBAC)
- role-based access control (RBAC)
- attribute-based access control (ABAC)
- Models for information flow policies
- $\rightarrow$ multilevel security (MLS)
- Models for non-interference/domain isolation policies
- $\rightarrow$ non-interference (NI)
- In Practice: most often hybrid models
### Access Control Models
Formal representations of permissions to execute operations on objects, e. g.:
- Reading files
- Issuing payments
- Controlling industrial centrifuges
Security policies describeaccess rules $\rightarrow$ security models formalize them
> Discretionary Access Control (DAC)
>
> Rules based on the identity of individual subjects (users, apps, processes, ...) or objects (files, directories, database tables, ...) $\rightarrow$ "Ann may read ProjectX Files."
Example: Access control in many OSs (e.g. Unix(oids), Windows)
Consequence: Individual users
- enjoy freedom w. r. t. granting access permissions as individually needed
- need to collectively enforce their organization’s security policy:
- competency problem
- responsibility problem
- malware problem
> Mandatory Access Control (MAC)
>
> System designers and administrators specify system-wide rules that apply to all users and cannot be sidestepped.
Examples:
- Organizational: airport security check
- Technical: medical information systems, policy-controlled operating systems (e.g. SELinux)
Consequence:
- Limited individual freedom
- Enforced by central instance:
- clearly identified
- competent (security experts)
- responsible (organizationally & legally)
##### DAC vs. MAC
In Real-world Scenarios: Mostly hybrid models enforced by both discretionary and mandatory components, e. g.:
- DAC: locally, within a project, team members individually define permissions w.r.t. documents (implemented in project management software and workstation OSs) inside this closed scope;
- MAC: globally for the organization, such that e.g. only documents approved for release by organizational policy rules (implemented in servers and their communication middleware) may be accessed from outside a project's scope.
#### Identity-based Access Control Models (IBAC)
Goal: To precisely specify the rights of individual, acting entities.
> Have a look at your (or any) operating system’s API documentation. Find a few examples for
> - operations executable by user processes (e.g. Linux syscalls),
> - their arguments,
> - the operating system resources they work with.
> Try to distinguish between subjects, objects and operations as defined in the classical ACF model. Can you see any ambiguities or inconsistencies w.r.t. the model?
> If you have never worked with an OS API, stick to the simple things (such as reading/writing a file). For Linux, you may find help in section 2 of the online manpages (http://man7.org/linux/man-pages/dir_section_2.html).
##### Access Control Matrix
Access Control Functions in Practice
Lampson [1974] already addresses the question of how to ...
- store in a well-structured way,
- efficiently evaluate, and
- completely analyze an ACF:
> Access Control Matrix (ACM)
>
> An ACM is a matrix $m:S\times O \rightarrow 2^{OP}$, such that $\forall s\in S,\forall o\in O:op\in m(s,o)\Leftrightarrow f(s,o,op)$.
An ACM is a rewriting of the definition of an ACF: nothing is added, nothing is left out ("$\Leftrightarrow$"). Despite being a purely theoretical model, it paved the way for practically implementing AC meta-information (e.g. as ACLs).
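As a quick illustration (my own sketch, not from the original notes): an ACM can be stored as a sparse map from $\langle s,o\rangle$ pairs to right sets, and evaluating the ACF is then a single lookup:

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>

// Sparse ACM: only non-empty cells m(s, o) are stored.
using Subject = std::string;
using Object  = std::string;
using Right   = std::string;
using ACM = std::map<std::pair<Subject, Object>, std::set<Right>>;

// ACF evaluation: f(s, o, op) <=> op in m(s, o)
bool f(const ACM& m, const Subject& s, const Object& o, const Right& op) {
    auto cell = m.find({s, o});
    return cell != m.end() && cell->second.count(op) > 0;
}

int main() {
    ACM m;
    // Rights loosely inspired by the AlphaCompany example above:
    m[{"ann", "ProjectXBoard"}] = {"read", "write"};
    m[{"bob", "ProjectXBoard"}] = {"read"};
    return f(m, "bob", "ProjectXBoard", "read") ? 0 : 1;
}
```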
> A fixed-time snapshot of all active entities, passive entities, and any meta-information used for making access decisions is called the *protection state* of an access control system.
> Goal of ACFs/ACMs
>
> To precisely specify a protection state of an AC system.
- $\delta(q,\sigma)=q'$ and $\lambda(q,\sigma)=\omega$ can be expressed through the *state diagram*: a directed graph $\langle Q,E\rangle$, where each edge $e\in E$ is represented by a state transition's predecessor node $q$, its successor node $q'$, and a string "$\sigma|\omega$" of its input and output, respectively.
- $x_{s_1},...,x_{s_m}\in S_q$ and $x_{o_1},...,x_{o_m}\in O_q$, where $s_i$ and $o_i$, $1\leq i\leq m$, are vector indices of the input arguments: $1\leq s_i,o_i\leq k$
- $p_1,...,p_n$ are HRU primitives
- Note: $\circ$ is the (transitive) function composition operator: $(f\circ g)(x)=g(f(x))$
Whenever $q$ is obvious or irrelevant, we use a programming-style notation
Interpretation: The structure of STS definitions is fixed in HRU:
Short, formal macros that describe the differences between $q$ and a successor state $q'=\delta(q,\langle op,(x_1,...,x_k)\rangle)$ that result from a complete execution of op:
- $\rightarrow$ Each of these with the intuitive semantics for manipulating $S_q, O_q$ or $m_q$.
Note the atomic semantics: the HRU model assumes that each command successfully called is always completely executed!
How to Design an HRU Security Model:
1. Model Sets: Subjects, objects, operations, rights $\rightarrow$ define the basic sets $S,O,OP,R$
2. STS: Semantics of operations (e. g. the future API of the system to model) that modify the protection state $\rightarrow$ define $\sigma$ using the normalized form/programming syntax of the STS
Informal Policy: "A sample solution (...) can be downloaded by students only after submitting their own solution." $\Leftrightarrow$ "If the automaton receives an input \langlewriteSolution,(s,o)\rangle and the conditions are satisfied, it transitions to a state where s is allowed to download the sample solution."
Informal Policy: "Student solutions can be submitted only before downloading any sample solution." $\Leftrightarrow$ "If the automaton receives an input\langlereadSample,(s,o)\rangleand the conditions are satisfied, it transitions to a state wheresis denied to submit a solution."
> Let $\sigma\sigma^*\in\sum^*$ be a sequence of inputs consisting of a single input $\sigma\in\sum\cup\{\epsilon\}$ followed by a sequence $\sigma^*\in\sum^*$, where $\epsilon$ denotes an empty input sequence. Then, $\delta^*:Q\times\sum^*\rightarrow Q$ is defined by $\delta^*(q,\epsilon)=q$ and $\delta^*(q,\sigma\sigma^*)=\delta^*(\delta(q,\sigma),\sigma^*)$.
A state $q$ of an HRU model is called *HRU safe* with respect to a right $r\in R$ iff, beginning with $q$, there is no sequence of commands that enters $r$ into an ACM cell where it did not exist in $q$.
According to Tripunitara and Li [2013], this property (called *simple-safety* there, due to more technical details) is defined as:
1. Find an upper bound for the length of all input sequences with different effects on the protection state w.r.t. safety
If such a bound can be found: $\exists$ a finite number of input sequences with different effects
2. All these inputs can be tested whether they violate safety. This test terminates because:
- each input sequence is finite
- there is only a finite number of relevant sequences
- $\rightarrow$ safety is decidable
Given a mono-operational HRU model.
Let $\sigma_1...\sigma_n$ be any sequence of inputs in $\sum^*$ that violates $safe(q,r)$, and let $p_1...p_n$ be the corresponding sequence of primitives (same length, since mono-operational).
Proposition: For each such sequence, there is a corresponding finite sequence that
- Still violates $safe(q,r)$
- Consists only of enter and two initial create primitives
In other words: For any input sequence,$\exists$ a finite sequence with the same effect.
Proof:
- We construct these finite sequences ...$\rightarrow$
- Transform $\sigma_1...\sigma_n$ into shorter sequences with the same effect:
1. Remove all input operations that contain delete or destroy primitives. The sequence still violates $safe(q,r)$, because conditions of successive commands must still be satisfied (no absence, only presence of rights is checked).
2. Prepend the sequence with an initial create subject $s_{init}$ operation. This won't change its net effect, because the new subject isn't used anywhere.
3. Prune the last create subject s operation and substitute each following reference to s with $s_{init}$. Repeat until all create subject operations are removed, except for the initial create subject $s_{init}$.
4. Same as steps 2 and 3 for objects.
5. Remove all redundant enter operations (remember: each matrix cell is a set $\rightarrow$ unique elements).
| Original sequence | After step 1 | After step 2 | After step 3 | After step 4 | After step 5 |
|---|---|---|---|---|---|
| enter r1 into $m(x_2,x_5)$; | enter r1 into $m(x_2,x_5)$; | enter r1 into $m(x_2,x_5)$; | enter r1 into $m(s_{init},x_5)$; | enter r1 into $m(s_{init},o_{init})$; | enter r1 into $m(s_{init},o_{init})$; |
| enter r2 into $m(x_2,x_5)$; | enter r2 into $m(x_2,x_5)$; | enter r2 into $m(x_2,x_5)$; | enter r2 into $m(s_{init},x_5)$; | enter r2 into $m(s_{init},o_{init})$; | enter r2 into $m(s_{init},o_{init})$; |
| enter r1 into $m(x_7,x_5)$; | enter r1 into $m(x_7,x_5)$; | enter r1 into $m(x_7,x_5)$; | enter r1 into $m(s_{init},x_5)$; | enter r1 into $m(s_{init},o_{init})$; | - |
| ... | ... | ... | ... | ... | ... |
Observations
- after step 3:
- Except for $s_{init}$, the sequence creates no more subjects
- All rights of the formerly created subjects are accumulated in $s_{init}\rightarrow$ for the evaluation of $safe(q,r)$, nothing has changed:
- This sequence still violates $safe(q,r)$, but its length is restricted to $(|S_q| + 1)(|O_q|+1)|R|+2$ because
- Each enter must enter a new right into a cell
- The number of cells is restricted to $(|S_q| + 1)(|O_q|+1)$
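For intuition, a quick instantiation of this bound (numbers mine): with $|S_q|=2$, $|O_q|=2$ and $|R|=2$, any violating sequence can be shortened to at most $(2+1)(2+1)\cdot 2+2=20$ inputs.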
Conclusions from these Theorems
- Dilemma:
- General (unrestricted) HRU models
- have strong expressiveness $\rightarrow$ can model a broad range of AC policies
- are hard to analyze: algorithms and tools for safety analysis
- $\rightarrow$ cannot certainly produce accurate results
- $\rightarrow$ are hard to design for approximative results
- Mono-operational HRU models
- have weak expressiveness $\rightarrow$ goes as far as uselessness: e.g. for modeling Unix creat (can only create files, sockets, IPC, ... that no user process can access!)
- are efficient to analyze: algorithms and tools for safety analysis
- $\rightarrow$ are always guaranteed to terminate
- $\rightarrow$ are straight-forward to design
Consequences:
- Model variants with restricted yet usable expressiveness have been proposed
- Heuristic analysis methods try to provide educated guesses about safety of unrestricted HRU
- Applications: (static) real-time systems, closed embedded systems
Monotonic Mono-conditional HRU Models
- Monotonic (MHRU): no delete or destroy primitives
- Mono-conditional: at most one clause in the conditions part (for monotonic bi-conditional models, safety is already undecidable ...)
- safe(q,r) efficiently decidable
- Applications: Archiving/logging systems (where nothing is ever deleted)
Finite Subject Set
- $\forall q\in Q,\exists n\in N: |S_q|\leq n$
- $safe(q,r)$ decidable, but high computational complexity
Fixed STS
- All STS commands are fixed, match particular application domain (e.g. OS access control [Lipton and Snyder, 1977]) $\rightarrow$ no model reusability
- For Lipton and Snyder [1977]: $safe(q,r)$ decidable in linear time (!)
Strong Type System
- Special model that generalizes HRU: Typed Access Matrix (TAM) [Sandhu, 1992]
- $safe(q,r)$ decidable in polynomial time for ternary, acyclic, monotonic variants
- high, though not unrestricted expressiveness in practice
##### (B) Heuristic Analysis Methods
Motivation:
- Restricted model variants: often too weak for real-world applications
- General HRU models: safety property cannot be guaranteed $\rightarrow$ let's try to get a piece of both cakes: heuristically guided safety estimation [Amthor et al., 2013]
- $\rightarrow$ For each $\sigma$, the heuristic has to decide:
- which operation op to use
- which vector of arguments x to pass
- which $q_i$ to use from the states in $Q$ known so far
- Termination: As soon as $\delta(q_i,\sigma)$ violates $safe(q_0,r)$.
Goal: Iteratively build up the (possibly infinite!) $Q$ for a model to falsify safety by example (finding a violating, but possible protection state).
Results:
- Termination: Well ... we only have a semi-decidable problem here: It can be guaranteed that a model is unsafe if we terminate. We cannot ever prove the opposite, however! ($\rightarrow$ safety undecidability)
- Performance: A few results
- 2013: model size 10 000 ≈ 2215 s
- 2018: model size 10 000 ≈ 0.36 s
- 2018: model size 10 000 000 ≈ 417 s
Achievements:
- Find typical errors in security policies: Guide their designers, who might know there’s something wrong w. r. t. right proliferation, but not what and why!
- Increase our understanding of unsafety origins: By building clever heuristics, we started to understand how we might design specialized HRU models ($\rightarrow$ fixed STS, type system) that are safety-decidable yet practically (re-) usable [Amthor and Rabe, 2020].
##### Summary HRU Models
Goal
- Analysis of right proliferation in AC models
- Assessing the computational complexity of such analyses
Method
- Combining ACMs and deterministic automata
- Defining $safe(q,r)$ based on this formalism
Conclusions
- Potential right proliferation (privilege escalation): Generally undecidable problem
- $\rightarrow$ HRU model family, consisting of application-tailored, safety-decidable variants
- $\rightarrow$ Heuristic analysis methods for practical error-finding
##### The Typed-Access-Matrix Model (TAM)
Goal
- AC model, similar expressiveness to HRU
- $\rightarrow$ can be directly mapped to implementations of an ACM: OS ACLs, DB permission assignment tables
- Better suited for safety analyses: precisely state model properties for decidable safety
Idea [Sandhu, 1992]
- Adopted from HRU: subjects, objects, ACM, automaton
> - $Q=2^S\times 2^O\times TYPE\times M$ is the state space, where $S$ and $O$ are the sets of subjects and objects as in HRU, with $S\subseteq O$, $TYPE=\{type|type:O\rightarrow T\}$ is a set of possible type functions, and $M$ is the set of possible ACMs as in HRU,
> - $\sum=OP\times X$ is the (finite) input alphabet where $OP$ is a set of operations as in HRU, $X=O^k$ is a set of $k$-dimensional vectors of arguments (objects) of these operations,
> - $\delta:Q\times\sum\rightarrow Q$ is the state transition function,
> - $q_0\in Q$ is the initial state,
> - $T$ is a static (finite) set of types,
> - $R$ is a (finite) set of access rights.
State Transition Scheme (STS)
$\delta:Q\times\sum\rightarrow Q$ is defined by a set of specifications:

where
- $q= (S_q,O_q,type_q,m_q)\in Q,op\in OP$
- $r_1,...,r_m\in R$
- $x_{s_1},...,x_{s_m}\in S_q$, $x_{o_1},...,x_{o_m}\in O_q\backslash S_q$, and $t_1,...,t_k\in T$, where $s_i$ and $o_i$, $1\leq i\leq m$, are vector indices of the input arguments: $1\leq s_i,o_i\leq k$
- Use of her confined objects by third parties $\rightarrow$ transitive right revocation
- Subjects using (or misusing) these objects $\rightarrow$ destruction of these subjects
- Subjects using such objects are confined: cannot forward read information
##### TAM Safety Decidability
Why all this?
- General TAM models (cf. previous definition) $\rightarrow$ safety not decidable (no surprise, since a generalization of HRU)
- MTAM: monotonic TAM models; STS without delete or destroy primitives $\rightarrow$ safety decidable if mono-conditional only
- AMTAM: acyclic MTAM models $\rightarrow$ safety decidable, but (most likely) not efficiently: NP-hard problem
- TAMTAM: ternary AMTAM models; each STS command requires max. 3 arguments $\rightarrow$ provably same computational power and thus expressive power as AMTAM; safety decidable in polynomial time
> For any operation $op$ with arguments $\langle x_1,t_1\rangle,\langle x_2,t_2\rangle,...,\langle x_k,t_k\rangle$ in an STS of a TAM model, $t_i$, $1\leq i\leq k$, is called a *child type* in $op$ iff $x_i$ is created by one of op's primitives; otherwise, $t_i$ is called a *parent type* in $op$.
> The type creation graph $TCG=\langle T,E\subseteq T\times T\rangle$ for the STS of a TAM model is a directed graph with vertex set $T$ and an edge $\langle u,v\rangle\in E$ iff $\exists op\in OP:u$ is a parent type in $op\wedge v$ is a child type in $op$.
Note: In bar, $u$ is both a parent type (because of $s_1$) and a child type (because of $s_2$) $\rightarrow$ hence the loop edge (see the sketch below).
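The STS of the example command bar referenced here did not survive extraction; a minimal sketch (assumed names and rights) that produces exactly such a loop edge at $u$:

```
command bar(s1 : u, s2 : u)
  if own in m(s1, s1)              // s1 of type u must already exist -> u is a parent type
  then
    create subject s2 of type u;   // s2 of type u is created -> u is also a child type
    enter own into m(s2, s2);
  end
```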
Safety Decidability
- We call a TAM model acyclic, iff its TCG is acyclic.
> Theorem [Sandhu, 1992, Theorem 5]
>
> Safety of a ternary, acyclic, monotonic TAM model (TAMTAM) is decidable in polynomial time in the size of $m_0$.
- Crucial property acyclic, intuitively:
- Evolution of the system (protection state transitions) checks both rights in the ACM as well as argument types
- TCG is acyclic $\Rightarrow\exists$ a finite sequence of possible state transitions after which no input tuple with argument types that were not already considered before can be found
- One may prove that an algorithm, which tries to expand all possible different follow-up states from $q_0$, may terminate after this finite sequence
- Proof details: See Sandhu [1992].
Expressive Power of TAMTAM
- MTAM: obviously same expressive power as monotonic HRU (MHRU) $\rightarrow$ cannot model:
> Have a look at the syscall API of Linux as a typical file server operating system. Roughly count the number of operations and estimate their average number of arguments based on a few samples. Then try to estimate the average number of files that each user keeps in her home directory. A good sample is your own user directory, which you can count (including subdirectories) as follows: `find ~ -type f | wc -l`
>
> If 200 employees of a medium-sized company have user accounts:
> - How many ACLs must be saved to encode the IBAC policy of this server as a classical ACM?
> - If each ACL takes 12 bits, how big is the resulting storage overhead in total?
> - If you had to heuristically analyze safety of this policy: how many different inputs would you have to simulate in the worst case just for the first state transition?
Problems of IBAC Models:
- Scalability w.r.t. the number of controlled entities
- Level of abstraction: System-oriented policy semantics (processes, files, databases, ...) instead of problem-oriented (management levels, user accounts, quota, ...)
Goals of RBAC:
- Solving these problems results in smaller modeling effort, which results in a smaller chance of human errors made in the process:
- Improved scalability and manageability
- Improved, application-oriented semantics: roles ≈ functions in organizations
> - $roles:S\rightarrow 2^R$ is a total function mapping sessions to sets of roles such that $\forall s\in S:r\in roles(s)\Rightarrow \langle user(s),r\rangle\in UA$.
- Users U model people: actual humans that operate the AC system
- Roles R model functions (accumulations of tasks), that originate from the workflows and areas of responsibility in organizations
- Permissions P model rights for any particular access to a particular document (e. g. read project documentation, transfer money, write into EPR, ...)
- The user-role-relation $UA\subseteq U\times R$ defines which roles are available to users at any given time $\rightarrow$ roles must be assumed during runtime first, before they are usable!
- The permission-role-relation $PA\subseteq P\times R$ defines which permissions are associated with roles
- $UA$ and $PA$ describe static policy rules: Roles available to a user are not considered to possibly change, same with permissions associated with a role. Examples:
- "Bob may assume the role of a developer; Ann may assume the role of a developer or a project manager; ..."
- "A developer may read and write the project documentation; a project manager may create branches of a source code repository; ..."
- Sessions $S$ describe dynamic assignments of roles $\rightarrow$ a session $s\in S$ models when a user is logged in (where she may use some role(s) available to her as per $UA$):
- The session-user-mapping $user:S\rightarrow U$ associates a session with its ("owning") user
- The session-roles-mapping $roles:S\rightarrow 2^R$ associates a session with the set of roles currently assumed by that user (active roles)

Remark:
Note the difference between users in RBAC and subjects in IBAC: the latter usually represent a technical abstraction, such as an OS process, while RBAC users always model an organizational abstraction, such as an employee, a patient, etc.!
##### RBAC Access Control Function
- Authorization in practice: access rules have to be defined for operations on objects (cf. IBAC)
- IBAC approach: access control function $f:S\times O\times OP\rightarrow \{true,false\}$
- RBAC approach: implicitly defined through $P$ $\rightarrow$ made explicit: $P\subseteq O\times OP$ is a set of permission tuples $\langle o,op\rangle$, where $o\in O$ is an object and $op\in OP$ an operation executable on it (the resulting ACF is sketched below)
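The resulting ACF then reads (a standard $RBAC_0$ formulation, stated here since the original line breaks off): $f_{RBAC_0}(s,o,op)=true\Leftrightarrow\exists r\in roles(s):\langle\langle o,op\rangle,r\rangle\in PA$. A minimal sketch in code (all names mine, not from the notes):

```cpp
#include <set>
#include <string>
#include <utility>

using Role       = std::string;
using Permission = std::pair<std::string, std::string>;   // <object, operation>
using PARel      = std::set<std::pair<Permission, Role>>; // permission-role relation

struct Session {
    std::string user;
    std::set<Role> activeRoles;                            // roles(s)
};

// f(s, o, op) = true iff some active role of the session carries <o, op>
bool f_rbac(const Session& s, const std::string& o,
            const std::string& op, const PARel& pa) {
    for (const Role& r : s.activeRoles)
        if (pa.count({{o, op}, r})) return true;
    return false;
}

int main() {
    PARel pa = {{{"ProjectDoc", "read"}, "proDev"}};
    Session bob{"bob", {"proDev"}};
    return f_rbac(bob, "ProjectDoc", "read", pa) ? 0 : 1;  // granted
}
```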
In practice, organizations have more requirements that need to be expressed in their security policy:
- Roles are often hierarchical: "Any project manager is also a developer, any medical director is also a doctor, ..." $\rightarrow$ $RBAC_1 = RBAC_0 + hierarchies$
- Role association and activation are often constrained: "No purchasing manager may be head of internal auditing, no product manager may be logged in as a project manager for more than one project at a time, ..." $\rightarrow$ $RBAC_2 = RBAC_0 + constraints$
- Both may be needed: $\rightarrow$ $RBAC_3$ = consolidation: $RBAC_0 + RBAC_1 + RBAC_2$
RBAC$_1$: Role Hierarchies
- Observation: Roles in organizations often overlap:
- Users in different roles have common permissions: "Any project manager must have the same permissions as any developer in the same project."
- Approach 1: disjoint permissions for roles proManager and proDev $\rightarrow$ any proManager user must always have proDev assigned and activated for any of her workflows $\rightarrow$ role assignment redundancy
- Approach 2: overlapping permissions: $\forall p\in P:\langle p,proDev\rangle\in PA\Rightarrow\langle p,proManager\rangle\in PA$ $\rightarrow$ any permission for project developers must be assigned to two different roles $\rightarrow$ role definition redundancy
- Hierarchy expressed through dominance relation: $r_1\leq r_2 \Leftrightarrow r_2$ inherits any permissions from $r_1$
- Interpretation
- Reflexivity: any role consists of ("inherits") its own permissions $\forall r\in R:r\leq r$
- Antisymmetry: no two different roles may mutually inherit their respective permissions $\forall r_1 ,r_2\in R:r_1\leq r_2\wedge r_2\leq r_1\Rightarrow r_1=r_2$
- Transitivity: permissions may be inherited indirectly $\forall r_1,r_2,r_3\in R:r_1\leq r_2 \wedge r_2\leq r_3\Rightarrow r_1\leq r_3$
> - $RH\subseteq R\times R$ is a partial order that represents a role hierarchy, where $\langle r,r'\rangle\in RH\Leftrightarrow r\leq r'$, such that $\langle R,\leq\rangle$ is a lattice,
> - $roles$ is defined as for $RBAC_0$, while additionally $\forall r,r'\in R,\forall s\in S:r\leq r'\wedge r'\in roles(s)\Rightarrow r\in roles(s)$ holds.
In prose: When activating any role that inherits permissions from another role, this other role is automatically (by definition) active as well.
- $\rightarrow$ no role assignment redundancy in defining the STS
- $\rightarrow$ no role definition redundancy in defining PA
RBAC$_2$: Constraints
- Observation: Assuming and activating roles in organizations is often more restricted:
- Certain roles may not be active at the same time (in the same session) for any user: "A payment initiator may not be a payment authorizer at the same time (in the same session)."
- Certain roles may never be assigned together to any user: "A purchasing manager may never be the same person as the head of internal auditing."
- $\rightarrow$ separation of duty (SoD)
- While SoD constraints are a more fine-grained type of security requirements to avoid mission-critical risks, there are other types represented by RBAC constraints.
- Constraint Types
- Separation of duty: mutually exclusive roles
- Quantitative constraints: maximum number of roles per user
- Temporal constraints: time/date/week/... of role activation (advanced RBAC models, e.g. Bertino et al. [2001])
- Factual constraints: assigning or activating roles for specific permissions causally depends on roles for certain other permissions (e.g. only allow user $u$ to activate the auditingDelegator role if the audit payments permission is usable by $u$)
> - $S$ is a set of subject identifiers and $O$ is a set of object identifiers,
> - $A_S=V_S^1 \times...\times V_S^n$ is a set of subject attributes, where each attribute is an n-tuple of values from arbitrary domains $V_S^i$, $1\leq i \leq n$,
> - $A_O=V_O^1\times...\times V_O^m$ is a corresponding set of object attributes, based on values from arbitrary domains $V_O^j$, $1\leq j \leq m$,
> - $att_S:S\rightarrow A_S$ is the subject attribute assignment function,
> - $att_O:O\rightarrow A_O$ is the object attribute assignment function,
> - $OP$ is a set of operation identifiers,
> - $AAR\subseteq \Phi\times OP$ is the authorization relation.
Interpretation
- Active and passive entities are modeled by $S$ and $O$, respectively
- Attributes in $A_S,A_O$ are index-referenced tuples of values, which are specific to some property of subjects $V_S^i$ (e.g. age) or of objects $V_O^j$ (e.g. PEGI rating)
- Attributes are assigned to subjects and objects via $att_S,att_O$
- Access control rules w.r.t. the execution of operations in $OP$ are modeled by the $AAR$ relation $\rightarrow$ determines ACF!
- $AAR$ is based on a set of first-order logic predicates $\Phi$: $\Phi=\{\phi_1(x_{s_1},x_{o_1}),\phi_2(x_{s_2},x_{o_2}),...\}$. Each $\phi_i\in\Phi$ is a binary predicate (a logical statement with two arguments), where $x_{s_i}$ is a subject variable and $x_{o_i}$ is an object variable.
##### ABAC Access Control Function
With conditions from $\Phi$ for executing operations in $OP$, $AAR$ determines the ACF of the model:
> ABAC ACF
>
> $f_{ABAC}:S\times O\times OP\rightarrow\{true,false\}$, where $f_{ABAC}(s,o,op)=true\Leftrightarrow\exists\langle\phi,op\rangle\in AAR:\phi(s,o)=true$.
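As an illustration (my own example, reusing the age/PEGI attributes from the interpretation above), one such predicate and the resulting check might look like this; the operation name play and the hard-wired AAR lookup are assumptions:

```cpp
#include <string>

struct Subject { int age; };          // att_S(s) = (age)
struct Object  { int pegiRating; };   // att_O(o) = (pegiRating)

// A sample binary predicate phi in Phi:
bool oldEnough(const Subject& s, const Object& o) {
    return s.age >= o.pegiRating;
}

// f_ABAC(s, o, play) = true iff <oldEnough, play> in AAR and oldEnough(s, o);
// the AAR lookup is hard-wired here for brevity.
bool f_abac(const Subject& s, const Object& o, const std::string& op) {
    return op == "play" && oldEnough(s, o);
}

int main() {
    Subject alice{17};
    Object  game{18};
    return f_abac(alice, game, "play") ? 0 : 1;   // exit 1: denied (17 < 18)
}
```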
> I work on six days in a week: $W=\{Mo,Tu,We,Th,Fr,Sa\}$. On each of these days, I can decide to procrastinate work from day $w$ to $w'$: the same or a later day that week $(w\rightarrow w')$. For a lattice $\langle W,\rightarrow\rangle$:
> Let's assume that Saturday is exclusively reserved for work I was unable to do on Monday. Is $\langle W,\rightarrow\rangle$ still a lattice now? Why (not)?
> - $\leq$ is a dominance relation where $c\leq d\Leftrightarrow$ information may flow from $c$ to $d$,
> - $cl:S\cup O\rightarrow C$ is a classification function, and
> - $\bigoplus:C\times C\rightarrow C$ is a reclassification function.
Interpretation
- Subject set $S$ models active entities, from which information flows originate
- Object set $O$ models passive entities, which may receive information flows (e.g. documents)
- Class set $C$ is used to label entities with identical information flow properties, e.g. $C=\{Physician,Patient\}$
- Classification function $cl$ assigns a class to each entity, e.g. $cl(cox)=Physician$
- Reclassification function $\bigoplus$ determines which class an entity is assigned after receiving a certain information flow; e.g. for a flow from Physician to Patient: $\bigoplus(Physician,Patient)=sup(\{Physician,Patient\})$
- rule "information may flow from any ward physician to an anamnesis record" $\Leftrightarrow$ Physician $\leq$ Anamnesis
- rule "information may flow from a medication record to the pharmacy" $\Leftrightarrow$ Medication $\leq$ Pharmacy
- classification cl:
- $cox=Physician$
- $carla=Medication$
We can now ...
- precisely define all information flows valid for a given policy
- define analysis goals for an IF model w.r.t.
- Correctness: $\exists$ covert information flows? (transitivity of $\leq$, automation: graph analysis tools)
- Redundancy: $\exists$ sets of subjects and objects with (transitively) equivalent information contents? (antisymmetry of $\leq$, automation: graph analysis tools)
- implement a model: through an automatically generated, isomorphic ACM (using already-present ACLs!)
#### Multilevel Security (MLS)
Motivation
- Introducing a hierarchy of information flow classes: levels of trust
- Subjects and objects are classified:
- Subjects w.r.t. their trustworthiness
- Objects w.r.t. their criticality
- Within this hierarchy, information may flow only in one direction $\rightarrow$ "secure" according to these levels!
- $\rightarrow \exists$ MLS models for different security goals!
Modeling Confidentiality Levels
- Class set: levels of confidentiality e.g. $C=\{public,confidential,secret\}$
- Dominance relation: hierarchy between confidentiality levels e.g. $\{public \leq confidential,confidential \leq secret\}$
- Classification of subjects and objects: $cl:S\cup O\rightarrow C$ e.g. $cl(BulletinBoard)=public,cl(Timetable)=confidential$
- Note: In contrast to Denning, $\leq$ in MLS models is a total order.
- Observation: L and m are isomorphic $\rightarrow$ redundancy?
- $\rightarrow$ So, why do we need both model components?
Rationale
- L is an application-oriented abstraction
- Supports convenient model specification
- Supports easy model correctness analysis ($\rightarrow$ reachability analyses in graphs)
- $\rightarrow$ easy to specify and to analyze
- m can be directly implemented by standard OS/DBIS access control mechanisms (ACLs, Capabilities) $\rightarrow$ easy to implement
- m is determined (= restricted) by L and cl, not vice-versa!
> Rationale for L and m
> - L and cl control m
> - m provides an easy specification for model implementation
##### Consistency of L,cl, and m
We know: IF rules specified by L and cl are implemented by an ACM m ...
So: What are the conditions for m to be a correct representation of L and cl?
Intuition: An ACM m is a correct representation of a lattice L iff information flows granted by m do not exceed those defined by L and cl. $\rightarrow$ BLP security property
Consequence: If we can prove this property for a given model, then its implementation (by m) is consistent with the rules given by L and cl.
> 2. $\delta$ is built such that for each state $q$ reachable from $q_0$ by a finite input sequence, where $q=\langle m,cl\rangle$ and $q'=\delta(q,\sigma)=\langle m',cl'\rangle$, $\forall s\in S,o\in O,\sigma\in\sum$ the following holds:
Let $q=\delta^*(q_0,\sigma^+)$, $\sigma^+\in\sum^+$, $q'=\delta(q,\sigma)$, $\sigma\in\sum$, $s\in S$, $o\in O$. With $q=\langle m,cl\rangle$ and $q'=\langle m',cl'\rangle$, the BLP BST for read-security is
- Proof: $(a1) \wedge (a2)= R′ \Rightarrow C′\equiv read\in m′(s,o) \Rightarrow cl′(o)\leq cl′(s)$, which exactly matches the definition of read-security for $q′$.
- Write-security: Same steps for $(b1)\wedge (b2)$.
Where Do We Stand?
- Precision: necessary and sufficient conditions for BLP security property
- Analytical power: statements about dynamic model behavior based on static analysis of the (finite and generally small) STS $\rightarrow$ tool support
- Insights: shows that BLP security is an inductive property
Problem: Larger systems: only source of access rules is the trust hierarchy $\rightarrow$ too coarse-grained!
Idea: Encode an additional, more fine-grained type of access restriction in the ACM $\rightarrow$ compartments
- Comp: set of compartments
- $co:S\cup O\rightarrow 2^{Comp}$: assigns a set of compartments to an entity as an (additional) attribute
- Good ol' BLP: $\langle S,O,L,Q,\sum,\delta,q_0\rangle$
- With compartments: $\langle S,O,L,Comp,Q_{co},\sum,\delta,q_0\rangle$, where $Q_{co}=M\times CL\times CO$ and $CO=\{co|co:S\cup O\rightarrow 2^{Comp}\}$ (a read-check sketch follows below)
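To make the compartment extension concrete, a sketch under standard BLP-with-compartments semantics (names and values mine): read access requires both level dominance and inclusion of the object's compartments in the subject's.

```cpp
#include <algorithm>
#include <set>
#include <string>

using Compartment = std::string;

// Read check: cl(o) <= cl(s) (level dominance) AND co(o) subset of co(s).
bool mayRead(int clSubject, const std::set<Compartment>& coSubject,
             int clObject,  const std::set<Compartment>& coObject) {
    bool dominates = clObject <= clSubject;
    bool compartmentsOk = std::includes(coSubject.begin(), coSubject.end(),
                                        coObject.begin(), coObject.end());
    return dominates && compartmentsOk;
}

int main() {
    // e.g. secret = 2, confidential = 1; compartments from a ward example
    std::set<Compartment> sComp = {"cardiology", "admin"};
    std::set<Compartment> oComp = {"cardiology"};
    return mayRead(2, sComp, 1, oComp) ? 0 : 1;   // granted
}
```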
- Consistency is an important property of composed models
- BLP is further extensible and refinable $\rightarrow$ starting point for later models, e. g. Biba
#### The Biba Model
BLP upside down [Biba, 1977]:

- BLP $\rightarrow$ preserves confidentiality
- Biba $\rightarrow$ preserves integrity
Applications Example: On-board Airplane Passenger Information Systems
- Goal: Provide in-flight information in cabin network
- Flight instruments data
- Outboard camera video streams
- Communication between pilot and tower
- Integrity: no information flow from cabin to flight deck!
- As employed in Boeing 787: common network for cabin and flight deck + software firewall + Biba implementation
Windows Vista UAC
- An application of the Biba model for OS access control:
- Integrity: Protect system files from malicious user (software) tampering
- Class hierarchy:
- system: OS level objects
- high: services
- medium: user level objects
- low: untrusted processes e. g. web browser, setup application, ...
- Consequence: every file, process, ... created by the web browser is classified low $\rightarrow$ cannot violate integrity of system- and user-objects
- Manual user involvement ($\rightarrow$ DAC portion of the policy): resolving intended exceptions, e.g. to install trusted application software
### Non-interference Models
Problem No. 1: Covert Channels
> Covert Channel [Lampson, 1973]
> Channels [...] not intended for information transfer at all, such as the service program’s effect on the system load.
- AC policies (ACM, HRU, TAM, RBAC, ABAC): colluding malware agents, escalation of common privileges
- Process 1: only read permissions on user files
- Process 2: only permission to create an internet socket
- both: communication via covert channel (e.g. swapping behavior)
- MLS policies (Denning, BLP, Biba): indirect information flow exploitation (Note: We can never prohibit any possible transitive IF ...)
- Test for existence of a file
- Volume control on smartphones
- Timing channels from server response times
Problem No. 2: Damage Range
How to substantiate a statement like: "Corruption of privileged system software will never have any impact on other system components." $\rightarrow$ Attack perimeter
Idea of NI models:
- Once more: higher level of abstraction
- Policy semantics: which domains should be isolated based on their mutual impact
Consequences:
- Easier policy modeling
- More difficult policy implementation ...($\rightarrow$ higher degree of abstraction!)
##### Example 1: Multi-application Smart Cards
- Different services, different providers, different levels of trust
Is there a sequence of actions $a^*\in A^*$ that violates $≈_{NI}$? $\rightarrow$ A model is called *NI-secure* iff there is no sequence of actions that results in an illegal domain interference. Now, what does this mean precisely ...?
Before we define what NI-secure is, assume we could remove all actions from an action sequence that have no effect on a given set of domains:
> Purge Function
>
> Let $aa^*\in A^*$ be a sequence of actions consisting of a single action $a\in A\cup\{\epsilon\}$ followed by a sequence $a^*\in A^*$, where $\epsilon$ denotes an empty sequence. Let $D'\in 2^D$ be any set of domains. Then, $purge:A^*\times 2^D\rightarrow A^*$ computes a subsequence of $aa^*$ by removing all actions without an observable effect on any element of $D'$:
> For a state $q\in Q$ of an NI model $\langle Q,\sum,\delta,\lambda,q_0,D,A,dom,≈_{NI},Out\rangle$, the predicate ni-secure(q) holds iff $\forall a\in A,\forall a^*\in A^*:\lambda(\delta^*(q,a^*),a)=\lambda(\delta^*(q,purge(a^*,dom(a))),a)$
2. Running the model on the purged input sequence so that it contains only actions that, according to $≈_{NI}$, actually have impact on $dom(a)$ yields $q′_{clean}=\delta^*(q,purge(a^*,dom(a)))$
3. If $\forall a\in A:\lambda(q',a)=\lambda(q'_{clean},a)$, then the model is called NI-secure w.r.t. $q$ ($ni\text{-}secure(q)$).
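A sketch of the purge step (my own iterative simplification of the recursive textbook definition; transitive interference is ignored for brevity): keep exactly the actions whose domain may interfere with some domain in $D'$.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

using Domain = std::string;
struct Action { std::string name; Domain dom; };

// ~NI as an explicit set of pairs <d, d'>: d may interfere with d'.
using Interference = std::set<std::pair<Domain, Domain>>;

std::vector<Action> purge(const std::vector<Action>& seq,
                          const std::set<Domain>& dPrime,
                          const Interference& ni) {
    std::vector<Action> out;
    for (const Action& a : seq)
        for (const Domain& d : dPrime)
            if (ni.count({a.dom, d})) {   // a has an observable effect on D'
                out.push_back(a);
                break;
            }
    return out;
}
```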
##### Comparison to HRU and IF Models
- HRU Models
- Policies describe rules that control subjects accessing objects
- Analysis goal: right proliferation
- Covert channel analysis: only based on model implementation
- IF Models
- Policies describe rules about legal information flows
- Competition: conflict relation $C\subseteq O\times O:\langle o,o'\rangle\in C\Leftrightarrow o$ and $o'$ belong to competing companies (non-reflexive, symmetric, generally not transitive)
- In terms of ABAC: object attribute $att_O:O\rightarrow 2^O$, such that $att_O(o)=\{o'\in O|\langle o,o'\rangle\in C\}$.
- If $\langle o_i,o_k\rangle\in C$: no transitive information flow $o_i\rightarrow o_j\rightarrow o_k$, i.e. consultant(s) of $o_i$ must never write to any $o_j\not=o_i$
- This is actually more restrictive than necessary: $o_j\rightarrow o_k$ and afterwards $o_i\rightarrow o_j$ would be fine! (no information can actually flow from $o_i$ to $o_k$)
- In other words: Criticality of an IF depends on existence of earlier flows.
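For comparison, a sketch of the stricter Brewer-Nash-style checks discussed here (my own code; the read-history set stands in for the consultant's past accesses):

```cpp
#include <set>
#include <string>
#include <utility>

using Obj = std::string;
using Conflict = std::set<std::pair<Obj, Obj>>;   // symmetric relation C

// Read o: forbidden if o competes with anything already read.
bool mayRead(const Obj& o, const std::set<Obj>& readHistory, const Conflict& C) {
    for (const Obj& h : readHistory)
        if (C.count({h, o})) return false;
    return true;
}

// Write o: forbidden if anything other than o has been read --
// this blocks the indirect flow o_i -> o_j -> o_k described above,
// at the price of the over-restrictiveness just discussed.
bool mayWrite(const Obj& o, const std::set<Obj>& readHistory) {
    for (const Obj& h : readHistory)
        if (h != o) return false;
    return true;
}
```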
Idea of LR-CW [Sharifi and Tripunitara, 2013]: Include time as a model abstraction!
Approach:
- $\forall s\in S,o\in O$: remember which information has flowed to an entity
> - $H=\{Z_e\subseteq F|e\in S\cup O\}$ is the history set: $f\in Z_e\Leftrightarrow e$ contains information about $f$ ($Z_e$ is the "history label" of $e$),
> - $\sum=OP\times X$ is the input alphabet, where
> - $OP=\{read,write\}$ is the set of operations,
> - $X=S\times O$ is the set of arguments of these operations,
> - $\delta:Q\times\sum\rightarrow Q$ is the state transition function,
- Applicability: more writes allowed in comparison to Brewer-Nash (note that this still complies with the general CW policy)
- Paid for with
- Need to store individual attributes of all entities (their history labels $Z_e$)
- Dependency of write permissions on earlier actions of other subjects
- More extensions:
- Operations to modify conflict relation
- Operations to create/destroy entities
#### An MLS Model for Chinese-Wall Policies
Problems
- Modeling of conflict relation
- Modeling of consultants history
Conflict relation is
- non-reflexive: no company is a competitor of itself
- symmetric: competition is always mutual
- not necessarily transitive: any company might belong to more than one conflict class $\Rightarrow$ if a competes with b and b competes with c, then a and c might still be in different conflict classes (= no competitors) $\rightarrow$ Cannot be modeled by a lattice!
- Challenge-response authentication using public keys
- Assumptions
- Each client owns an individual key pair ( $k_{pub}, k_{sec}$ )
- Server knows public keys of clients (PKI)
- Clients do not disclose their secret keys
- Server reliably generates nonces
- Properties
- Client and server share no secrets
- No key exchange before communication
- No mutual trust required
- But: sender must know public key of receiver
- $\rightarrow$ PKIs
3. Sealing of Documents, e.g. Contracts (compare sealing using secret keys)
- $\exists$ just 1 owner of secret key
- $\rightarrow$ only she may seal contract
- Knowing her public key,
- $\rightarrow$ everybody can check contract’s authenticity
- $\rightarrow$ especially, everybody can prove that only she could have sealed it
- $\rightarrow$ non-repudiability; see below, digital signatures
- Consequence of Symmetric vs. Asymmetric Encryption
- Symmetric: shared key; integrity and authenticity can be checked only by key holders $\rightarrow$ message authentication codes (MACs)
- Asymmetric: integrity and authenticity can be checked by anyone holding the public key (because only the holder of the secret key could have encrypted the checksum) $\rightarrow$ digital signatures
4. Key Distribution for Symmetric Schemes
- Asymmetric encryption is expensive
- Runtime: slower by > 3 orders of magnitude
- Key pairs generation
- High computational costs
- High degree of trust needed in generating organization
- Public Key Infrastructures needed for publishing public keys
- Worldwide data bases with key certificates, certifying (public key $\Leftrightarrow$ person)
- Certification authorities
- $\rightarrow$ Use asymmetric key for establishing communication
- 64 digits: more memory cells than atoms in Solar system
- Current standard $\geq 308$ digits
- Optimization: Atkin's sieve, $O(n^{1/2+o(1)})$
However ...
- Until today, we only believe that computing $k_{sec}$ requires factorizing $n$; there might be a completely different way. But: if we can compute $k_{sec}$ in this way, we would have solved the factorization problem
- Until today, no polynomial factorization algorithm is known
- Until today, nobody proved that such algorithm cannot exist...
Precautions in PKIs: Prepare for fast exchange of cryptosystem (e.g. based on computation of logarithm in elliptic curves)
Attack on confidentiality
- Ann with ($k_{pub},k_{sec}$) is a client of 2 servers Good and Bad:
- Bad $\rightarrow$ Ann: $\{X\}_{k_{pub}}$ (uses $\{X\}_{k_{pub}}$ as nonce)
- Ann $\rightarrow$ Bad: $\{\{X\}_{k_{pub}}\}_{k_{sec}}=X$ (response by Ann)
Cause
- *nonce* property violated
- Same key used for 2 different purposes (authentication, confidentiality)
$\rightarrow$ flawed use of security mechanism
### Cryptographic Hash Functions
Goal
- Discover violation of integrity of data
- So that integrity of information is maintained
Method
- Checksum generation by cryptographic hash functions
- Checksum encryption
- Integrity check by
- Generating a new checksum
- Decryption of encrypted checksum
- Comparison of both values
Method of Operation: Map data
- Of arbitrary length
- To checksum of fixed length
such that $Text1 \not= Text2 \Rightarrow hash(Text1) \not= hash(Text2)$ with high probability
Weak and Strong Hash Functions: One-way
1. $\forall x\in X, hash:X\rightarrow Y$ is efficiently computable
2. There is no efficient algorithm that computes $x$ from $hash(x)$ $\rightarrow$ given $hash(x)$, it is practically impossible to compute an $x'\not=x$ where $hash(x')=hash(x)$
Strong Hash Functions: + Collision-free
1. $hash: X\rightarrow Y$ is a weak hash function
2. It is practically impossible to find $x\not=x'$ where $hash(x)=hash(x')$ (however, such collisions necessarily exist, since $|X|>|Y|$ ...)
Algorithms: best before ...
- 160 - Bit checksums
- RIPEMD-160
- For creating qualified digital signatures certified in Germany until end of 2010
- For checking until end of 2015
- Secure Hash Algorithm (SHA-1, published NIST 1993)
- Secure communication to authenticating system required (personal data)
Organizational Costs
- Reference probes are personal data $\rightarrow$ Data Protection Act
- Reaction time on security incidents
- Passwords, smartcards can be exchanged easily
- Fingers or eyes ...
Social Barriers
- Not easily accepted
- Fingerprints: criminal image
- Retina
- Some weekend entertainments identifiable
- Some diseases identifiable
- Naive advertising calls for distrust
- Politically: "Biometrician undesired on national security congress"
- Technically: for many years unkept promise to cure weaknesses
### Cryptographic Protocols
#### SmartCards
Used For: Authentication of humans to IT systems
Verified Item: Knowledge of complex secret
- Secret part of asymmetric key pair
- Symmetric key
Verification
- Challenge/response protocols
- Goal
- Proof that the secret is known
- Contrary to password authentication, no secret exposure
Vehicle for Humans: SmartCards
- Small Computing Devices Encompassing
- Processor(s)
- RAM
- Persistent memory
- Communication interfaces
- What They Do
- Store and keep complex secrets (keys)
- Run cryptographic algorithms
- Respond to challenges in challenge/response protocols (a sketch follows below)
- Encrypt incoming nonces
- Launch challenges to authenticate other principals
- Generate nonces, verify response
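A minimal sketch of one such challenge/response round (names mine; the XOR "cipher" is a placeholder standing in for a real cryptographic operation on the card, NOT an actual primitive):

```cpp
#include <cstdint>
#include <random>

static const std::uint64_t K_SECRET = 0x0BADC0DEULL;   // card-held key stand-in

std::uint64_t freshNonce() {                 // server: fresh, unpredictable nonce
    std::random_device rd;
    return (static_cast<std::uint64_t>(rd()) << 32) | rd();
}

std::uint64_t cardRespond(std::uint64_t nonce) {        // card: "encrypt" nonce
    return nonce ^ K_SECRET;                            // placeholder operation
}

bool serverVerify(std::uint64_t nonce, std::uint64_t response) {
    return (response ^ K_SECRET) == nonce;              // server checks response
}

int main() {
    std::uint64_t n = freshNonce();          // 1. server -> card: n
    std::uint64_t r = cardRespond(n);        // 2. card -> server: {n}_k
    return serverVerify(n, r) ? 0 : 1;       // 3. accept iff response matches
}
```

Note the key point of the scheme: the secret never leaves the card; only the response to the fresh nonce crosses the channel.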
Usage... e.g. via plug-ins in browsers
Properties
- no secret is exposed
- $\rightarrow$ no trust in authenticating system required
- $\rightarrow$ no trust in network required
- Besides authentication other features possible
- $\rightarrow$ digital signatures, credit card, parking card ...
- Weak verification of the holder's right to use the card (PIN, password)
- $\rightarrow$ some cards have fingerprint readers
- Power supply for contactless cards
#### Authentication Protocols
Used For: Authentication between IT systems
Method: challenge/response-scheme
Based on
- symmetric key: principal and authenticating system share secret
- asymmetric key: authenticating system knows public key of principal
Authentication Using Secret Keys
The Fundamentals: 2 Scenarios
1. After one single authentication, Alice wants to use all servers in a distributed system of an organization. "Here is a nonce I encrypted using Alice‘s secret key. Prove that you are Alice by decrypting it."
2. Alice wants authentic and confidential communication with Bob. Authentication Server serves session keys to Bob and Alice
Needham-Schroeder Authentication Protocol
- for secret keys
- Goal: To establish authentic and confidential communication between 2 Principals
- Method
1. Authentication of Alice to Bob $\rightarrow$ Bob knows the other end is Alice
2. Authentication of Bob to Alice $\rightarrow$ Alice knows the other end is Bob
3. Establish a fresh secret between Alice and Bob: a shared symmetric session key $\rightarrow$ confidentiality, integrity, authenticity
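For orientation, the classic five-message flow of the symmetric-key Needham-Schroeder protocol, as known from the literature (the surviving notes do not reproduce it; $K_A$, $K_B$ are the principals' long-term keys shared with the server S, $K_{AB}$ the fresh session key, $N_A$, $N_B$ nonces):

```
1. Alice -> S     : Alice, Bob, N_A
2. S     -> Alice : {N_A, Bob, K_AB, {K_AB, Alice}_K_B}_K_A
3. Alice -> Bob   : {K_AB, Alice}_K_B
4. Bob   -> Alice : {N_B}_K_AB
5. Alice -> Bob   : {N_B - 1}_K_AB
```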
> Trusted Computing Base (TCB)
>
> The set of functions of an IT system that are necessary and sufficient for implementing its security properties $\rightarrow$ Isolation, Policy Enforcement, Authentication ...
> Security Architecture
> The part(s) of a system’s architecture that implement its TCB $\rightarrow$ Security policies, Security Server (PDP) and PEPs, authentication components, ...
> Security Mechanisms
> Algorithms and data structures for implementing functions of a TCB $\rightarrow$ Isolation mechanisms, communication mechanisms, authentication mechanisms, ...
$\rightarrow$ TCB $\sim$ runtime environment for security policies
Security architectures have been around for a long time ...
- Creates a fresh session key for Alice's communication with the TGS: $SessionKey_{Alice/TGS}$
- Creates Alice’s ticket for TGS and encrypts it with $K_{AS/TGS}$ (so Alice cannot modify it): $Ticket_{Alice/TGS}=\{Alice, TGS, ..., SessionKey_{Alice/TGS}\}_{K_{AS/TGS}}$
- Encrypts everything with $K_{Alice/AS}$ (so only Alice can read the session key and the TGS-Ticket) $\{TGS, Timestamp , SessionKey_{Alice/TGS}, Ticket_{Alice/TGS}\}_{K_{Alice/AS}}$
4. Alice’s workstation
- Now has $\{TGS, Timestamp, SessionKey_{Alice/TGS}, Ticket_{Alice/TGS}\}_{K_{Alice/AS}}$
- Requests Alice’s password
- Computes $K_{Alice/AS}$ from password using a cryptographic hash function
- Uses it to decrypt above message from AS
- Result: Alice’s workstation has
- Session key for TGS session: $SessionKey_{Alice/TGS}$
- Ticket for TGS: $Ticket_{Alice/TGS}$
- The means to create an authenticator
#### Using a Server
Authentication (bidirectional)
2 Steps
1. Authentication of client to server
2. Authentication of server to client (optional)
1. Authentication of Client
- Assumptions
- Alice has session key
- Alice has server ticket
1. Alice assembles authenticator $A_{Alice}=\{Alice,Alice’s network address,timestamp\}_{SessionKey_{Alice/Server}}$ Only Alice can do that, because only she knows $SessionKey_{Alice/Server}$
2. Alice sends $Ticket_{Alice/Server}, A_{Alice}$ to Server
3. Server decrypts ticket and thus gets session key; thus it can decrypt $A_{Alice}$ and check:
- Freshness
- Compliance of names in ticket and authenticator
- Origin of message (as told by network interface) and network address in authenticator
2. Authentication of Servers
- Server sends $\{Timestamp+1\}_{SessionKey_{Alice/Server}}$ to Alice
- Can only be done by principal that knows $SessionKey_{Alice/Server}$
- This can only be a server that can extract the session key from the ticket $Ticket_{Alice/Server}=\{Alice,Server ,..., SessionKey_{Alice/Server}\}_{K_{TGS/Server}}$