mirror of https://github.com/tlsnotary/pagesigner-cli.git
synced 2026-01-09 14:38:07 -05:00

update to work with PageSigner v3
3  .gitmodules  vendored  Normal file
@@ -0,0 +1,3 @@
[submodule "pagesigner"]
	path = pagesigner
	url = https://github.com/tlsnotary/pagesigner
31  README  Normal file
@@ -0,0 +1,31 @@
pgsg-node.js allows you to run PageSigner with Node.js >= v15.0

Clone this repo with:

git clone --recurse-submodules https://github.com/tlsnotary/pagesigner-cli

Install the dependencies by running, inside the pagesigner-cli directory:

npm ci


NB! You must supply a file with HTTP headers via --headers path/to/headers
The most basic headers file looks like this:

GET /r/worldnews/ HTTP/1.1
Host: www.reddit.com <---first \r\n
<---second \r\n

As shown above, there are two CRLF sequences '\r\n\r\n' after the last header's last letter.


Usage: ./pgsg-node.js <command> [arguments]

where <command> is one of notarize, verify

Examples:

./pgsg-node.js notarize example.com --headers headers.txt
Notarize example.com using HTTP headers from headers.txt

./pgsg-node.js verify imported.pgsg
Verify a PageSigner session from imported.pgsg. This will create a session directory with the decrypted cleartext and a copy of the pgsg file.
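
A complete run might look like this (a sketch; the URL and file names are just
examples). pgsg-node.js converts bare '\n' in the headers file to '\r\n', so a
file written with an ordinary heredoc works, and the --headers path is resolved
relative to the directory containing pgsg-node.js:

cat > headers.txt <<'EOF'
GET /r/worldnews/ HTTP/1.1
Host: www.reddit.com

EOF
./pgsg-node.js notarize www.reddit.com --headers headers.txt
./pgsg-node.js verify saved_sessions/<date>_www.reddit.com/<date>.pgsg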
93  README.md
@@ -1,93 +0,0 @@
UPDATE November 2021. pagesigner-browserless is not yet available for PageSigner v3. When it becomes available, this repo will be changed accordingly.

UPDATE October 2020. This repository works only for pre-v2 PageSigner. You will have to set up your own oracle server and modify the values in /src/shared/tlsnotary.ini as well as change the pubkeys in /src/shared/pubkeys.txt. Also, the server being notarized must support TLS 1.0 or TLS 1.1.


# Browserless PageSigner #

A set of tools to create and use PageSigner-format notarization files.

Three scripts are currently provided: (1) notarize.py (2) auditor.py (3) parse-pgsg.py

* notarize - generate notarizations of webpages
* auditor - check the validity of a pgsg file and extract the html from it
* parse-pgsg - currently, just extracts the DER certificates from the file for verification.

More details on each below.

## 1. notarize.py ##

Intended to be run as a script.
Use the script notarize.py in src/auditee.
Usage:
```
python notarize.py [-e headers_file] [-a] www.reddit.com/r/all
```
(no need for https, it is assumed)

### Important notes on usage ###

* HTTP request headers: the -e option allows you to specify an **absolute** file path to a file that
contains a set of http headers in json (not including the GET request line). Request types other than GET have not yet
been implemented. An example headers file is shown after this list.

* Configure the notary: if necessary, edit the \[Notary\] section of the `src/shared/tlsnotary.ini` config file with
the appropriate server IP and port. Do *not* edit the remaining fields unless you know what you are doing.

Note that currently the settings are for the main tlsnotarygroup1 pagesigner oracle. This should work fine.
Bear in mind that this oracle server currently rate-limits on a per-IP basis; for high-frequency runs this may cause notarization to fail.

* The -a option: this flag allows you to request a check of the oracle status via AWS queries.
Note that this check runs *twice*: once for the main oracle server, then again for the signing server which holds the notarizing private key.
The recommendation is to use this check periodically, but not for every run: (a) because the oracle check takes time (a few seconds),
and (b) to avoid swamping AWS with queries. For example, in a bash script you could configure this option to be applied to 1 out of 10 runs.

Please see the pagesigner-oracles repo for details on the setup of the oracle server on Amazon AWS.

* Verifying the certificate: the main disadvantage of operating outside the browser is that we can't
directly reuse the browser's certificate store. For now, you can find the file pubkey1 in the
session directory, which contains the pubkey of the certificate used, in hex format; manually compare it
with that reported in your browser for the same website. Note, however, that this is an **RSA** certificate,
so you will have to make sure that you connect to the site (in the browser) with an RSA key-exchange ciphersuite.
A less clunky way of achieving this sanity check is sought.
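
An example headers file for the -e option (a sketch; the header names and values are
placeholders, and which headers a given site needs is up to you). get_headers() in
notarize.py reads a flat json object of header names to values and appends them after
the generated GET and Host lines:

```
{
  "User-Agent": "Mozilla/5.0",
  "Accept-Encoding": "gzip"
}
```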

## 2. auditor.py ##

Use this to extract html and examine it, in cases where import into the PageSigner addon is not possible.

Usage and output:

```
python auditor.py myfile.pgsg
Notary pubkey OK
Processing data for server: incapsula.com
Notary signature OK
Commitment hash OK
HTML decryption with correct HMACs OK.
Audit passed! You can read the html at: fullpathtohtml and check the server certificate with the data provided in: fullpathtodomainfile
```

If any of the above checks do not pass, the notarization file is **definitely** invalid.
Note that the oracle's pubkey is embedded in the code; if you don't use the tlsnotarygroup1 oracle, you'll have to change it.
To complete the checking, you should compare the contents of the created `domain_data.txt` file with the certificate public key that your browser shows for that domain.
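
One way to fetch the site's RSA pubkey for that comparison from the command line (a
sketch, not part of these scripts; example.com is a placeholder and AES256-SHA is just
one RSA key-exchange ciphersuite the server may accept):

```
openssl s_client -connect example.com:443 -cipher AES256-SHA </dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout
```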

## 3. parse-pgsg.py ##

For now this is a bare-bones script that merely outputs the certificates found in the pgsg file (which are the same
certificates that were passed over the wire during the audit) into a series of DER-encoded files 0.der, 1.der ... and fullcert.der.
Note that 0.der should be the certificate of the site itself.

Usage:
```
python parse-pgsg.py myfile.pgsg
```

If you want to check their contents, it can easily be done with openssl:

```
openssl x509 -in 0.der -inform der -text -noout -fingerprint
```

This will show you the cert fingerprint, common name, expiration date, etc.
2156  package-lock.json  generated  Normal file
File diff suppressed because it is too large
17  package.json  Normal file
@@ -0,0 +1,17 @@
{
  "type": "module",
  "dependencies": {
    "@peculiar/webcrypto": "^1.2.3",
    "bigint-crypto-utils": "^2.5.3",
    "bytestreamjs": "1.0.29",
    "cbor-js": "^0.1.0",
    "cose-js": "^0.8.2",
    "fast-sha256": "^1.3.0",
    "libsodium-wrappers-sumo": "^0.7.9",
    "node-fetch": "2.6.6",
    "pkijs": "^2.2.1",
    "simple-js-ec-math": "^2.0.1",
    "tweetnacl": "^1.0.3",
    "universal-dom-parser": "0.0.3"
  }
}
1  pagesigner  Submodule
Submodule pagesigner added at fe6a75e41f
237  pgsg-node.js  Executable file
@@ -0,0 +1,237 @@
#!/usr/bin/env node

import {int2ba, ba2int, assert, b64decode, b64encode} from './pagesigner/core/utils.js'
import {parse_certs} from './pagesigner/core/verifychain.js';
import {verifyNotary, getURLFetcherDoc} from './pagesigner/core/oracles.js';
import {FirstTimeSetup} from './pagesigner/core/FirstTimeSetup.js';
import {globals} from './pagesigner/core/globals.js';

import * as Path from 'path'
import {fileURLToPath} from 'url'
// __dirname is the directory where we are located
const __dirname = Path.dirname(fileURLToPath(import.meta.url))
// this workaround allows us to require() from ES6 modules, which is not allowed by default
import { createRequire } from 'module'
const require = createRequire(import.meta.url)

const pkijs = require("pkijs");
const { Crypto } = require("@peculiar/webcrypto");
const crypto = new Crypto();
global.crypto = crypto;
pkijs.setEngine("newEngine", crypto, new pkijs.CryptoEngine({ name: "", crypto: crypto, subtle: crypto.subtle }))

global.CBOR = require('cbor-js')
import * as COSE from './pagesigner/core/third-party/coseverify.js'
global.COSE = COSE
// replace the browser's fetch
import fetch from 'node-fetch'
global.fetch = fetch
const http = require('http');
// keepAliveAgent tells fetch to reuse the same source port
global.keepAliveAgent = new http.Agent({keepAlive: true});
// replace the browser's DOMParser
import DOMParser from 'universal-dom-parser'
global.DOMParser = DOMParser
const fs = require('fs')
global.fs = fs
global.sodium = require('libsodium-wrappers-sumo');
global.nacl = require('tweetnacl')
global.ECSimple = require('simple-js-ec-math')
global.bcuNode = require('bigint-crypto-utils')
global.fastsha256 = require('fast-sha256');

// these must be imported dynamically, after global.bcuNode becomes available
const TLSNotarySession = (await import('./pagesigner/core/TLSNotarySession.js')).TLSNotarySession;
const Main = (await import('./pagesigner/core/Main.js')).Main;

// always return 0 when the browser code calls performance.now()
global.performance = {'now': function(){return 0;}};
// when the browser code asks for a resource URL, return a full path
global.chrome = {'extension': {'getURL': (url) => Path.join(__dirname, 'pagesigner', url)}}

// override some methods of Socket
import {Socket} from './pagesigner/core/Socket.js';
const net = require('net');
class SocketNode extends Socket {
  constructor(server, port) {
    super(server, port)
    this.sock = new net.Socket();
    const that = this
    this.sock.on('data', function(d) {
      that.buffer = Buffer.concat([that.buffer, d])
    });
  }
  async connect() {
    await this.sock.connect(this.port, this.name);
    // use the captured reference: inside a plain callback, 'this' would not be the SocketNode
    const that = this
    setTimeout(function() {
      if (!that.wasClosed) that.close();
    }, this.lifeTime);
    return 'ready';
  }
  async send(d) {
    const data = Buffer.from(d) // Buffer.from is a factory, not a constructor
    this.sock.write(data);
  }
  async close() {
    this.sock.destroy()
  }
}
global.SocketNode = SocketNode

// a drop-in replacement for the HTML WebWorker
const { Worker } = require('worker_threads');
class mWorker extends Worker {
  constructor(url) {
    super(url)
    this.onmessage = function(){}
    // EventEmitter invokes listeners with 'this' bound to the emitter, i.e. this worker
    this.on('message', function(msg) {
      this.onmessage(msg)
    })
  }
}
global.Worker = mWorker

// the original parseAndAssemble is called with relative paths; we convert them into absolute paths
import './pagesigner/core/twopc/circuits/casmbundle.js'
CASM.parseAndAssembleOld = CASM.parseAndAssemble
CASM.parseAndAssemble = function(file) {
  const fullpath = Path.join(__dirname, 'pagesigner', 'core', 'twopc', 'circuits', file)
  return CASM.parseAndAssembleOld(fullpath)
}


async function createNewSession(host, request, response, date, pgsg, is_imported = false) {
  const suffix = is_imported ? "_imported" : ""
  const sessDir = Path.join(__dirname, 'saved_sessions', date + "_" + host + suffix)
  fs.mkdirSync(sessDir, { recursive: true });
  fs.writeFileSync(Path.join(sessDir, "request"), request)
  fs.writeFileSync(Path.join(sessDir, "response"), response)
  fs.writeFileSync(Path.join(sessDir, date + ".pgsg"), Buffer.from(JSON.stringify(pgsg)))
  return sessDir
}


function showUsage() {
  console.log("Usage: ./pgsg-node.js <command> [options] \r\n")
  console.log("where <command> is one of notarize, verify\r\n")
  console.log("Examples:\r\n")
  console.log("./pgsg-node.js notarize example.com --headers headers.txt")
  console.log("Notarize example.com using HTTP headers from headers.txt\r\n")
  console.log("./pgsg-node.js verify imported.pgsg")
  console.log("Verify a PageSigner session from imported.pgsg. This will create a session directory with the decrypted cleartext and a copy of the pgsg file.\r\n")
  console.log("\r\n")
  process.exit(0)
}

async function setupNotary() {
  const m = new Main();
  if (globals.useNotaryNoSandbox) {
    return await m.queryNotaryNoSandbox(globals.defaultNotaryIP);
  } else {
    const cacheDir = Path.join(__dirname, 'cache')
    const tnPath = Path.join(cacheDir, 'trustedNotary')
    if (fs.existsSync(tnPath)) {
      // load the notary from disk
      const obj = JSON.parse(fs.readFileSync(tnPath))
      obj['URLFetcherDoc'] = b64decode(obj['URLFetcherDoc'])
      console.log('Using cached notary from ', tnPath)
      console.log('Notary IP address: ', obj.IP);
      return obj
    } else {
      // fetch and verify the URLFetcher doc
      const URLFetcherDoc = await getURLFetcherDoc(globals.defaultNotaryIP);
      const trustedPubkeyPEM = await verifyNotary(URLFetcherDoc);
      assert(trustedPubkeyPEM != undefined);
      const obj = {
        'IP': globals.defaultNotaryIP,
        'pubkeyPEM': trustedPubkeyPEM,
        'URLFetcherDoc': URLFetcherDoc
      };
      // save the notary to disk
      const objSave = {
        'IP': obj.IP,
        'pubkeyPEM': obj.pubkeyPEM,
        'URLFetcherDoc': b64encode(obj.URLFetcherDoc)
      }
      fs.writeFileSync(tnPath, Buffer.from(JSON.stringify(objSave)))
      return obj
    }
  }
}

async function main() {
  const argv = process.argv
  if (argv[2] === 'notarize') {
    if (argv.length !== 6 || argv[4] !== '--headers') {
      showUsage();
    }

    const cacheDir = Path.join(__dirname, 'cache')
    if (!fs.existsSync(cacheDir)) { fs.mkdirSync(cacheDir) }
    const psPath = Path.join(cacheDir, 'parsedCircuits')
    const gbPath = Path.join(cacheDir, 'gatesBlob')

    let circuits
    if (fs.existsSync(psPath)) {
      // load cached serialized circuits
      circuits = JSON.parse(fs.readFileSync(psPath))
    } else {
      // run first-time setup
      circuits = await new FirstTimeSetup().start();
      for (const [k, v] of Object.entries(circuits)) {
        circuits[k]['gatesBlob'] = b64encode(circuits[k]['gatesBlob'])
      }
      fs.writeFileSync(psPath, Buffer.from(JSON.stringify(circuits)))
    }
    for (const [k, v] of Object.entries(circuits)) {
      circuits[k]['gatesBlob'] = b64decode(circuits[k]['gatesBlob'])
    }

    // prepare root store certificates
    const rootStorePath = Path.join(__dirname, 'pagesigner', 'core', 'third-party', 'certs.txt')
    await parse_certs(fs.readFileSync(rootStorePath).toString());

    const server = argv[3]
    const headersfile = Path.join(__dirname, argv[5])
    const headers = fs.readFileSync(headersfile).toString().replace(/\n/g, '\r\n')

    const m = new Main();
    m.trustedOracle = await setupNotary();
    // start the actual notarization
    const session = new TLSNotarySession(
      server, 443, headers, m.trustedOracle, globals.sessionOptions, circuits, null);
    const obj = await session.start();
    obj['title'] = 'PageSigner notarization file';
    obj['version'] = 6;
    if (!globals.useNotaryNoSandbox) {
      obj['URLFetcher attestation'] = m.trustedOracle.URLFetcherDoc;
    }
    const [host, request, response, date] = await m.verifyPgsgV6(obj);
    const serializedPgsg = m.serializePgsg(obj);
    const sessDir = await createNewSession(host, request, response, date, serializedPgsg)
    console.log('Session was saved in ', sessDir)
    process.exit(0)
  } else if (argv[2] === 'verify') {
    if (argv.length !== 4) {
      showUsage()
    }
    const pgsgBuf = fs.readFileSync(argv[3])
    const serializedPgsg = JSON.parse(pgsgBuf)
    const m = new Main();
    const pgsg = m.deserializePgsg(serializedPgsg);
    // prepare root store certificates
    const rootStorePath = Path.join(__dirname, 'pagesigner', 'core', 'third-party', 'certs.txt')
    await parse_certs(fs.readFileSync(rootStorePath).toString());
    const [host, request, response, date] = await m.verifyPgsgV6(pgsg);
    const sessDir = await createNewSession(host, request, response, date, serializedPgsg, true)
    console.log('The imported session was verified and saved in ', sessDir)
    process.exit(0)
  } else {
    showUsage()
  }
}
main()
24  bfx_README.md
@@ -1,24 +0,0 @@
HOW TO USE:
===========

First, you need an API key and API secret from Bitfinex,
available on the website. These are alphanumeric strings.
Write them as the first two lines in a file anywhere on disk.
Then, run the script as:

python bitfinexAPI.py absolute-path-to-APIkeyfile absolute-path-to-new-headers-file

where the first argument is the full path of the API key file you created earlier,
and the second argument is a new file location in which the headers can be stored
(it can be anywhere, but make sure it's the **absolute** path).
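
For example, the key file is just two lines (hypothetical placeholder values):

abcDEF123ghiJKL456mnoPQR789    <-- API key
zyxWVU987tsrQPO654nmlKJI321    <-- API secret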

For now, the script will record a proof of the balance of the account owner, which
can be found in `src/auditee/sessions/<timestamp>/commit/html-1`. The data can be seen
at the bottom of the file in json.

This script is designed so it can be called from a shell script or similar, perhaps
at regular intervals. Usage is up to your imagination.

Notes
=====
Still primitive. See the TODOs in the Python script, and read the API docs on bitfinex.com.
54  bitfinexAPI.py
@@ -1,54 +0,0 @@
from __future__ import print_function
import requests
import json
import base64
import hashlib
import time
import hmac
import sys
from subprocess import check_output

def build_headers(header_file, bfx_data):
    payloadObject = {
        'request': bfx_data['call'],
        #nonce must be strictly increasing
        'nonce': str(long(time.time()*1000000)),
        'options': {}
    }
    payload_json = json.dumps(payloadObject)
    payload = base64.b64encode(payload_json)
    m = hmac.new(bfx_data['secret'], payload, hashlib.sha384)
    m = m.hexdigest()
    headers = {
        'X-BFX-APIKEY': bfx_data['key'],
        'X-BFX-PAYLOAD': payload,
        'X-BFX-SIGNATURE': m
    }
    with open(header_file, 'wb') as f:
        f.write(json.dumps(headers))


def run_pagesigner(header_file, bfx_data):
    url = bfx_data['url'] + bfx_data['call']
    #TODO tidy up file paths
    r = check_output(['python', '../auditee/notarize.py', '-e', header_file, url])
    #TODO flag success/failure
    print ("PageSigner notarization output: \n", r)

if __name__ == "__main__":
    bitfinexURLroot = 'api.bitfinex.com'
    #TODO: the first obvious extension is to
    #add API queries other than balance
    bitfinexURLAPIcall = '/v1/balances'
    if len(sys.argv) != 3:
        print ("wrong args, see bfx_README.md")
        exit(1)
    #TODO use optparse or similar, make input more flexible, including file paths
    #and other options
    with open(sys.argv[1], 'rb') as f:
        bitfinexKey = f.readline().rstrip()
        bitfinexSecret = f.readline().rstrip()
    bfx_data = {'url': bitfinexURLroot, 'call': bitfinexURLAPIcall,
                'key': bitfinexKey, 'secret': bitfinexSecret}
    build_headers(sys.argv[2], bfx_data)
    run_pagesigner(sys.argv[2], bfx_data)
161  auditor.py
@@ -1,161 +0,0 @@
#!/usr/bin/env python
from __future__ import print_function

from base64 import b64decode, b64encode
from hashlib import md5, sha1, sha256
from os.path import join
from oracles import oracle_modulus
import binascii, hmac, os, platform, tarfile
import random, re, sys, time, zipfile

#file system setup.
data_dir = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.dirname(data_dir))
install_dir = os.path.dirname(os.path.dirname(data_dir))
sessions_dir = join(data_dir, 'sessions')
time_str = time.strftime('%d-%b-%Y-%H-%M-%S', time.gmtime())
current_session_dir = join(sessions_dir, time_str)
os.makedirs(current_session_dir)

global_tlsver = bytearray('\x03\x02')
global_use_gzip = True
global_use_slowaes = False
global_use_paillier = False
oracle_ba_modulus = None
oracle_int_modulus = None

def extract_audit_data(audit_filename):
    audit_data = {}
    with open(audit_filename, 'rb') as f:
        header = f.read(29)
        if header != 'tlsnotary notarization file\n\n':
            raise Exception("Invalid file format")
        version = f.read(2)
        if version != '\x00\x02':
            raise Exception("Incompatible file version")
        audit_data['cipher_suite'] = shared.ba2int(f.read(2))
        audit_data['client_random'] = f.read(32)
        audit_data['server_random'] = f.read(32)
        audit_data['pms1'] = f.read(24)
        audit_data['pms2'] = f.read(24)
        audit_data['certs_len'] = shared.ba2int(f.read(3))
        audit_data['certs'] = f.read(audit_data['certs_len'])
        audit_data['tlsver'] = f.read(2)
        audit_data['initial_tlsver'] = f.read(2)
        response_len = shared.ba2int(f.read(8))
        audit_data['response'] = f.read(response_len)
        IV_len = shared.ba2int(f.read(2))
        if IV_len not in [258, 16]:
            print ("IV length was: ", IV_len)
            raise Exception("Wrong IV format in audit file")
        audit_data['IV'] = f.read(IV_len)
        audit_data['oracle_modulus_len'] = f.read(2) #TODO can check this
        audit_data['signature'] = f.read(len(oracle_ba_modulus))
        audit_data['commit_hash'] = f.read(32)
        audit_data['oracle_modulus'] = f.read(512)
        if audit_data['oracle_modulus'] != oracle_ba_modulus:
            print ("file mod was: ", binascii.hexlify(audit_data['oracle_modulus']))
            print ("actual was: ", binascii.hexlify(oracle_ba_modulus))
            raise Exception("Unrecognized oracle")
        audit_data['audit_time'] = f.read(4)
    return audit_data

#unpack and check validity of Python modules
def first_run_check(modname, modhash):
    if not modhash: return
    mod_dir = join(data_dir, 'python', modname)
    if not os.path.exists(mod_dir):
        print ('Extracting '+modname+'.tar.gz...')
        with open(join(data_dir, 'python', modname+'.tar.gz'), 'rb') as f: tarfile_data = f.read()
        if md5(tarfile_data).hexdigest() != modhash:
            raise Exception ('Wrong hash')
        os.chdir(join(data_dir, 'python'))
        tar = tarfile.open(join(data_dir, 'python', modname+'.tar.gz'), 'r:gz')
        tar.extractall()
        tar.close()

if __name__ == "__main__":
    audit_filename = sys.argv[1]
    #for the md5 hash, see https://pypi.python.org/pypi/<module name>/<module version>
    modules_to_load = {'rsa-3.1.4':'b6b1c80e1931d4eba8538fd5d4de1355',\
                       'pyasn1-0.1.7':'2cbd80fcd4c7b1c82180d3d76fee18c8',\
                       'slowaes':'', 'requests-2.3.0':'7449ffdc8ec9ac37bbcd286003c80f00'}
    for x, h in modules_to_load.iteritems():
        first_run_check(x, h)
        sys.path.append(join(data_dir, 'python', x))

    import rsa
    import pyasn1
    import requests
    from pyasn1.type import univ
    from pyasn1.codec.der import encoder, decoder
    from slowaes import AESModeOfOperation
    import shared
    oracle_ba_modulus = bytearray('').join(map(chr, oracle_modulus))
    oracle_int_modulus = shared.ba2int(oracle_ba_modulus)

    shared.load_program_config()

    if int(shared.config.get("General", "gzip_disabled")) == 1:
        global_use_gzip = False

    audit_data = extract_audit_data(audit_filename)
    #1. Verify notary pubkey - done in extract_audit_data
    print ('Notary pubkey OK')
    #2. Verify signature.
    #First, extract the cert in DER form from the notarization file.
    #Then, extract from the cert the modulus and server name (common name field).
    #To do this, we need to initialise the TLSNClientSession
    audit_session = shared.TLSNClientSession(ccs=audit_data['cipher_suite'], tlsver=audit_data['initial_tlsver'])
    first_cert_len = shared.ba2int(audit_data['certs'][:3])
    server_mod, server_exp = audit_session.extract_mod_and_exp(certDER=audit_data['certs'][3:3+first_cert_len], sn=True)
    print ('Processing data for server: ', audit_session.server_name)
    data_to_be_verified = audit_data['commit_hash'] + audit_data['pms2'] + shared.bi2ba(server_mod) + audit_data['audit_time']
    data_to_be_verified = sha256(data_to_be_verified).digest()
    if not shared.verify_signature(data_to_be_verified, audit_data['signature'], oracle_int_modulus):
        print ('Audit FAILED. Signature is not verified.')
        exit()
    print ('Notary signature OK')
    #3. Verify commitment hash.
    if not sha256(audit_data['response']).digest() == audit_data['commit_hash']:
        print ('Audit FAILED. Commitment hash does not match encrypted server response.')
        exit()
    print ('Commitment hash OK')
    #4. Decrypt html and check for mac errors.
    audit_session.unexpected_server_app_data_count = shared.ba2int(audit_data['response'][0])
    audit_session.tlsver = audit_data['tlsver']
    audit_session.client_random = audit_data['client_random']
    audit_session.server_random = audit_data['server_random']
    audit_session.pms1 = audit_data['pms1']
    audit_session.pms2 = audit_data['pms2']
    audit_session.p_auditee = shared.tls_10_prf('master secret'+audit_session.client_random+audit_session.server_random,
                                                first_half=audit_session.pms1)[0]
    audit_session.p_auditor = shared.tls_10_prf('master secret'+audit_session.client_random+audit_session.server_random,
                                                second_half=audit_session.pms2)[1]
    audit_session.set_master_secret_half()
    audit_session.do_key_expansion()
    audit_session.store_server_app_data_records(audit_data['response'][1:])
    audit_session.IV_after_finished = (map(ord, audit_data['IV'][:256]), ord(audit_data['IV'][256]), \
        ord(audit_data['IV'][257])) if audit_data['cipher_suite'] in [4, 5] else audit_data['IV']
    plaintext, bad_mac = audit_session.process_server_app_data_records(is_for_auditor=True)
    if bad_mac:
        print ('Audit FAILED. Decrypted data has bad HMACs.')
        exit() #bail out, as with the other failed checks above
    print ('HTML decryption with correct HMACs OK.')
    plaintext = shared.dechunk_http(plaintext)
    plaintext = shared.gunzip_http(plaintext)
    #5. Display html + success.
    with open(join(current_session_dir, 'audited.html'), 'wb') as f:
        f.write(plaintext)
    #print out the info about the domain
    n_hexlified = binascii.hexlify(shared.bi2ba(server_mod))
    #pubkey in the format 09 56 23 ....
    n_write = " ".join(n_hexlified[i:i+2] for i in range(0, len(n_hexlified), 2))
    with open(join(current_session_dir, 'domain_data.txt'), 'wb') as f:
        f.write('Server name: '+audit_session.server_name + '\n\n'+'Server pubkey:' + '\n\n' + n_write+'\n')

    print ("Audit passed! You can read the html at: ",
           join(current_session_dir, 'audited.html'),
           'and check the server certificate with the data provided in ',
           join(current_session_dir, 'domain_data.txt'))
412  notarize.py
@@ -1,412 +0,0 @@
#!/usr/bin/env python
from __future__ import print_function

from base64 import b64decode, b64encode
from hashlib import md5, sha1, sha256
from os.path import join
import binascii, hmac, os, platform, tarfile
import Queue, random, re, shutil, signal, sys, time
import SimpleHTTPServer, socket, threading, zipfile
import string, json
from optparse import OptionParser
from oracles import check_oracle, oracle, oracle_modulus
try: import wingdbstub
except: pass

#file system setup.
data_dir = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.dirname(data_dir))
install_dir = os.path.dirname(os.path.dirname(data_dir))
sessions_dir = join(data_dir, 'sessions')
time_str = time.strftime('%d-%b-%Y-%H-%M-%S', time.gmtime())
current_session_dir = join(sessions_dir, time_str)
os.makedirs(current_session_dir)

#Globals
audit_no = 0 #we may be auditing multiple URLs. This var keeps track of how many
#successful audits there were so far and is used to index html files audited.
#Default values from the config file. Will be overridden after the config file is parsed.
global_tlsver = bytearray('\x03\x02')
global_use_gzip = True
global_use_slowaes = False
global_use_paillier = False
random_uid = ''

def probe_server_modulus(server):
    probe_session = shared.TLSNClientSession(server, tlsver=global_tlsver)
    print ('ssl port is: ', probe_session.ssl_port)
    tls_sock = shared.create_sock(probe_session.server_name, probe_session.ssl_port)
    probe_session.start_handshake(tls_sock)
    server_mod, server_exp = probe_session.extract_mod_and_exp()
    tls_sock.close()
    return shared.bi2ba(server_mod)


def start_audit(server_name, headers, server_modulus):
    global global_tlsver
    global global_use_gzip
    global global_use_slowaes
    tlsn_session = shared.TLSNClientSession(server_name, tlsver=global_tlsver)
    tlsn_session.server_modulus = shared.ba2int(server_modulus)
    tlsn_session.server_mod_length = shared.bi2ba(len(server_modulus))
    print ('Preparing encrypted pre-master secret')
    prepare_pms(tlsn_session)

    for i in range(10):
        try:
            print ('Performing handshake with server')
            tls_sock = shared.create_sock(tlsn_session.server_name, tlsn_session.ssl_port)
            tlsn_session.start_handshake(tls_sock)
            retval = negotiate_crippled_secrets(tlsn_session, tls_sock)
            if not retval == 'success':
                raise shared.TLSNSSLError('Failed to negotiate secrets: '+retval)
            #TODO: cert checking; how to do it for browserless mode?
            #========================================================
            #before sending any data to the server, compare this connection's cert to the
            #one which FF already validated earlier
            #if sha256(tlsn_session.server_certificate.asn1cert).hexdigest() != certhash:
            #    raise Exception('Certificate mismatch')
            #========================================================
            print ('Getting data from server')
            response = make_tlsn_request(headers, tlsn_session, tls_sock)
            #prefix the response with the number of to-be-ignored records;
            #note: more than 256 unexpected records will cause a failure of audit. Just as well!
            response = shared.bi2ba(tlsn_session.unexpected_server_app_data_count, fixed=1) + response
            break
        except shared.TLSNSSLError:
            shared.ssl_dump(tlsn_session)
            raise
        except Exception as e:
            print ('Exception caught while getting data from server, retrying...', e)
            if i == 9:
                raise Exception('Notarization failed')
            continue

    global audit_no
    audit_no += 1 #we want to increase only after the server responded with data
    sf = str(audit_no)

    commit_hash, pms2, signature, audit_time = commit_session(tlsn_session, response, sf)
    with open(join(current_session_dir, 'sigfile'+sf), 'wb') as f:
        f.write(signature)
    with open(join(current_session_dir, 'commit_hash_pms2_servermod'+sf), 'wb') as f:
        f.write(commit_hash + pms2 + shared.bi2ba(tlsn_session.server_modulus))

    msg = sha256(commit_hash + pms2 + shared.bi2ba(tlsn_session.server_modulus) + audit_time).digest()
    oracle_ba_modulus = bytearray('').join(map(chr, oracle_modulus))
    oracle_int_modulus = shared.ba2int(oracle_ba_modulus)
    if not shared.verify_signature(msg, signature, oracle_int_modulus):
        raise Exception("Notarization FAILED, notary signature invalid.")

    print ('Verified OK')
    audit_data = 'tlsnotary notarization file\n\n'
    audit_data += '\x00\x02' #2 version bytes
    audit_data += shared.bi2ba(tlsn_session.chosen_cipher_suite, fixed=2) #2 bytes
    audit_data += tlsn_session.client_random + tlsn_session.server_random #64 bytes
    audit_data += tlsn_session.pms1 + pms2 #48 bytes
    audit_data += shared.bi2ba(len(tlsn_session.server_certificate.certs), fixed=3)
    audit_data += tlsn_session.server_certificate.certs
    audit_data += tlsn_session.tlsver #2 bytes
    audit_data += tlsn_session.initial_tlsver #2 bytes
    audit_data += shared.bi2ba(len(response), fixed=8) #8 bytes
    audit_data += response #note that it includes unexpected pre-request app data, 10s of kB
    IV = tlsn_session.IV_after_finished if tlsn_session.chosen_cipher_suite in [47, 53] \
        else shared.rc4_state_to_bytearray(tlsn_session.IV_after_finished)
    audit_data += shared.bi2ba(len(IV), fixed=2) #2 bytes
    audit_data += IV #16 bytes, or 258 bytes for RC4.
    audit_data += shared.bi2ba(len(oracle_ba_modulus), fixed=2)
    audit_data += signature #512 bytes RSA PKCS 1 v1.5 padding
    audit_data += commit_hash #32 bytes sha256 hash
    audit_data += oracle_ba_modulus
    audit_data += audit_time

    with open(join(current_session_dir, sf+".pgsg"), "wb") as f:
        f.write(audit_data)

    #for later verification, write out the server modulus in hex
    #(e.g. to be compared against what's in the browser)
    n_hexlified = binascii.hexlify(shared.bi2ba(tlsn_session.server_modulus))
    #pubkey in the format 09 56 23 ....
    n_write = " ".join(n_hexlified[i:i+2] for i in range(0, len(n_hexlified), 2))
    with open(join(current_session_dir, 'pubkey'+sf), 'wb') as f: f.write(n_write)

    print ("\n\n NOTARIZATION SUCCEEDED. \n ",
           "You can pass the file(s) ", join(current_session_dir, "1.pgsg"),
           " to an auditor for verification, or import into the PageSigner extension.")

    rv = decrypt_html(pms2, tlsn_session, sf)
    html_paths = b64encode(rv[1])
    return True

#Because there is a 1 in ? chance that the encrypted PMS will contain zero bytes in its
#padding, we first try the encrypted PMS with a reliable site and see if it gets rejected.
def prepare_pms(tlsn_session):
    n = shared.bi2ba(tlsn_session.server_modulus)
    rs_choice = random.choice(shared.reliable_sites.keys())
    for i in range(10): #keep trying until the reliable-site check succeeds
        try:
            pms_session = shared.TLSNClientSession(rs_choice, shared.reliable_sites[rs_choice][0], ccs=53, tlsver=global_tlsver)
            if not pms_session:
                raise Exception("Client session construction failed in prepare_pms")
            tls_sock = shared.create_sock(pms_session.server_name, pms_session.ssl_port)
            pms_session.start_handshake(tls_sock)
            reply = send_and_recv('rcr_rsr_rsname_n',
                                  pms_session.client_random+pms_session.server_random+rs_choice[:5]+n)
            if reply[0] != 'success':
                raise Exception ('Failed to receive a reply for rcr_rsr_rsname_n:')
            if not reply[1] == 'rrsapms_rhmac_rsapms':
                raise Exception ('bad reply. Expected rrsapms_rhmac_rsapms:')
            reply_data = reply[2]
            rrsapms2 = reply_data[:256]
            pms_session.p_auditor = reply_data[256:304]
            rsapms2 = reply_data[304:]
            response = pms_session.complete_handshake(tls_sock, rrsapms2)
            tls_sock.close()
            if not response:
                print ("PMS trial failed")
                continue
            #judge success/fail based on whether a properly encoded
            #Change Cipher Spec record is returned by the server (we could
            #also check the server finished, but it isn't necessary)
            if not response.count(shared.TLSRecord(shared.chcis, f='\x01', tlsver=global_tlsver).serialized):
                print ("PMS trial failed, retrying. (", binascii.hexlify(response), ")")
                continue
            tlsn_session.auditee_secret = pms_session.auditee_secret
            tlsn_session.auditee_padding_secret = pms_session.auditee_padding_secret
            tlsn_session.enc_second_half_pms = shared.ba2int(rsapms2)
            tlsn_session.set_enc_first_half_pms()
            tlsn_session.set_encrypted_pms()
            return
        except shared.TLSNSSLError:
            shared.ssl_dump(pms_session, fn='preparepms_ssldump')
            shared.ssl_dump(tlsn_session)
            raise
        #except Exception,e:
        #    print ('Exception caught in prepare_pms, retrying...', e)
        #    continue
    raise Exception ('Could not prepare PMS with ', rs_choice, ' after 10 tries. Please '+\
                     'double check that you are using a valid public key modulus for this site; '+\
                     'it may have expired.')

def send_and_recv(cmd, dat, timeout=5):
    headers = {'Request':cmd, "Data":b64encode(dat), "UID":random_uid}
    url = 'http://'+shared.config.get("Notary", "server_name")+":"+shared.config.get("Notary", "server_port")
    r = requests.head(url, headers=headers)
    r_response_headers = r.headers #case-insensitive dict
    received_cmd, received_dat = (r_response_headers['response'], r_response_headers['data'])
    return ('success', received_cmd, b64decode(received_dat))

#reconstruct correct http headers
#for passing to the TLSNotary custom ssl session
#TODO not yet implemented in browserless mode; should
#add standard headers, esp. gzip according to prefs
def parse_headers(headers):
    header_lines = headers.split('\r\n') #no newline issues; it was constructed like that
    server = header_lines[1].split(':')[1].strip()
    if not global_use_gzip:
        modified_headers = '\r\n'.join([x for x in header_lines if 'gzip' not in x])
    else:
        modified_headers = '\r\n'.join(header_lines)
    return (server, modified_headers)


def negotiate_crippled_secrets(tlsn_session, tls_sock):
    '''Negotiate with auditor in order to create valid session keys
    (except server mac is garbage as auditor withholds it)'''
    assert tlsn_session.handshake_hash_md5
    assert tlsn_session.handshake_hash_sha
    tlsn_session.set_auditee_secret()
    cs_cr_sr_hmacms_verifymd5sha = chr(tlsn_session.chosen_cipher_suite) + tlsn_session.client_random + \
        tlsn_session.server_random + tlsn_session.p_auditee[:24] + tlsn_session.handshake_hash_md5 + \
        tlsn_session.handshake_hash_sha
    reply = send_and_recv('cs_cr_sr_hmacms_verifymd5sha', cs_cr_sr_hmacms_verifymd5sha)
    if reply[0] != 'success':
        raise Exception ('Failed to receive a reply for cs_cr_sr_hmacms_verifymd5sha:')
    if not reply[1] == 'hmacms_hmacek_hmacverify':
        raise Exception ('bad reply. Expected hmacms_hmacek_hmacverify but got', reply[1])
    reply_data = reply[2]
    expanded_key_len = shared.tlsn_cipher_suites[tlsn_session.chosen_cipher_suite][-1]
    if len(reply_data) != 24+expanded_key_len+12:
        raise Exception('unexpected reply length in negotiate_crippled_secrets')
    hmacms = reply_data[:24]
    hmacek = reply_data[24:24 + expanded_key_len]
    hmacverify = reply_data[24 + expanded_key_len:24 + expanded_key_len+12]
    tlsn_session.set_master_secret_half(half=2, provided_p_value=hmacms)
    tlsn_session.p_master_secret_auditor = hmacek
    tlsn_session.do_key_expansion()
    tlsn_session.send_client_finished(tls_sock, provided_p_value=hmacverify)
    sha_digest2, md5_digest2 = tlsn_session.set_handshake_hashes(server=True)
    reply = send_and_recv('verify_md5sha2', md5_digest2+sha_digest2)
    if reply[0] != 'success':
        raise Exception("Failed to receive a reply for verify_md5sha2")
    if not reply[1] == 'verify_hmac2':
        raise Exception("bad reply. Expected verify_hmac2:")
    if not tlsn_session.check_server_ccs_finished(provided_p_value=reply[2]):
        raise Exception ("Could not finish handshake with server successfully. Audit aborted")
    return 'success'

def make_tlsn_request(headers, tlsn_session, tls_sock):
    '''Send TLS request including http headers and receive server response.'''
    try:
        tlsn_session.build_request(tls_sock, headers)
        response = shared.recv_socket(tls_sock) #no handshake flag means we wait on timeout
        if not response:
            raise Exception ("Received no response to request, cannot continue audit.")
        tlsn_session.store_server_app_data_records(response)
    except shared.TLSNSSLError:
        shared.ssl_dump(tlsn_session)
        raise

    tls_sock.close()
    #we return the full record set, not only the response to our request
    return tlsn_session.unexpected_server_app_data_raw + response

def commit_session(tlsn_session, response, sf):
    '''Commit the encrypted server response and other data to auditor'''
    commit_dir = join(current_session_dir, 'commit')
    if not os.path.exists(commit_dir): os.makedirs(commit_dir)
    #Serialization of the RC4 'IV' requires concatenating the box,x,y elements of the RC4 state tuple
    IV = shared.rc4_state_to_bytearray(tlsn_session.IV_after_finished) \
        if tlsn_session.chosen_cipher_suite in [4, 5] else tlsn_session.IV_after_finished
    stuff_to_be_committed = {'response':response, 'IV':IV,
                             'cs':str(tlsn_session.chosen_cipher_suite),
                             'pms_ee':tlsn_session.pms1, 'domain':tlsn_session.server_name,
                             'certificate.der':tlsn_session.server_certificate.asn1cert,
                             'origtlsver':tlsn_session.initial_tlsver, 'tlsver':tlsn_session.tlsver}
    for k, v in stuff_to_be_committed.iteritems():
        with open(join(commit_dir, k+sf), 'wb') as f: f.write(v)
    commit_hash = sha256(response).digest()
    reply = send_and_recv('commit_hash', commit_hash)
    #TODO: changed response from webserver
    if reply[0] != 'success':
        raise Exception ('Failed to receive a reply')
    if not reply[1] == 'pms2':
        raise Exception ('bad reply. Expected pms2')
    return (commit_hash, reply[2][:24], reply[2][24:536], reply[2][536:540])


def decrypt_html(pms2, tlsn_session, sf):
    '''Receive the correct server mac key and then decrypt the server response (html),
    which includes authentication of the response. Submit the resulting html to the browser
    for display (optionally rendered by stripping http headers).'''
    print ("\nStarting decryption of content, may take a few seconds...")
    try:
        tlsn_session.auditor_secret = pms2[:tlsn_session.n_auditor_entropy]
        tlsn_session.set_auditor_secret()
        tlsn_session.set_master_secret_half() #without arguments sets the whole MS
        tlsn_session.do_key_expansion() #also resets the encryption connection state
    except shared.TLSNSSLError:
        shared.ssl_dump(tlsn_session)
        raise
    #either using slowAES or an RC4 ciphersuite
    try:
        plaintext, bad_mac = tlsn_session.process_server_app_data_records()
    except shared.TLSNSSLError:
        shared.ssl_dump(tlsn_session)
        raise
    if bad_mac:
        raise Exception("ERROR! Audit not valid! Plaintext is not authenticated.")
    return decrypt_html_stage2(plaintext, tlsn_session, sf)


def decrypt_html_stage2(plaintext, tlsn_session, sf):
    plaintext = shared.dechunk_http(plaintext)
    if global_use_gzip:
        plaintext = shared.gunzip_http(plaintext)
    #write a session dump for checking, even in case of success
    with open(join(current_session_dir, 'session_dump'+sf), 'wb') as f: f.write(tlsn_session.dump())
    commit_dir = join(current_session_dir, 'commit')
    html_path = join(commit_dir, 'html-'+sf)
    with open(html_path, 'wb') as f: f.write('\xef\xbb\xbf'+plaintext) #see "Byte order mark"
    if not int(shared.config.get("General", "prevent_render")):
        html_path = join(commit_dir, 'forbrowser-'+sf+'.html')
        with open(html_path, 'wb') as f:
            f.write('\r\n\r\n'.join(plaintext.split('\r\n\r\n')[1:]))
    print ("Decryption complete.")
    return ('success', html_path)

def get_headers(hpath):
    #assumed to be in json format
    with open(hpath, 'rb') as f:
        json_headers = json.loads(f.read())
    print (json_headers)
    headers_as_string = ''
    for h in json_headers:
        headers_as_string += bytearray(h, 'utf-8') + ':' + bytearray(json_headers[h], 'utf-8') + ' \r\n'
    return headers_as_string

#unpack and check validity of Python modules
def first_run_check(modname, modhash):
    if not modhash: return
    mod_dir = join(data_dir, 'python', modname)
    if not os.path.exists(mod_dir):
        print ('Extracting '+modname+'.tar.gz...')
        with open(join(data_dir, 'python', modname+'.tar.gz'), 'rb') as f: tarfile_data = f.read()
        if md5(tarfile_data).hexdigest() != modhash:
            raise Exception ('Wrong hash')
        os.chdir(join(data_dir, 'python'))
        tar = tarfile.open(join(data_dir, 'python', modname+'.tar.gz'), 'r:gz')
        tar.extractall()
        tar.close()

if __name__ == "__main__":
    #for the md5 hash, see https://pypi.python.org/pypi/<module name>/<module version>
    modules_to_load = {'rsa-3.1.4':'b6b1c80e1931d4eba8538fd5d4de1355',\
                       'pyasn1-0.1.7':'2cbd80fcd4c7b1c82180d3d76fee18c8',\
                       'slowaes':'', 'requests-2.3.0':'7449ffdc8ec9ac37bbcd286003c80f00'}
    for x, h in modules_to_load.iteritems():
        first_run_check(x, h)
        sys.path.append(join(data_dir, 'python', x))

    import rsa
    import pyasn1
    import requests
    from pyasn1.type import univ
    from pyasn1.codec.der import encoder, decoder
    from slowaes import AESModeOfOperation
    import shared
    shared.load_program_config()
    shared.import_reliable_sites(join(install_dir, 'src', 'shared'))
    random_uid = ''.join(random.choice(string.ascii_lowercase+string.digits) for x in range(10))
    #override default config values
    if int(shared.config.get("General", "tls_11")) == 0:
        global_tlsver = bytearray('\x03\x01')
    if int(shared.config.get("General", "decrypt_with_slowaes")) == 1:
        global_use_slowaes = True
    if int(shared.config.get("General", "gzip_disabled")) == 1:
        global_use_gzip = False
    if int(shared.config.get("General", "use_paillier_scheme")) == 1:
        global_use_paillier = True

    parser = OptionParser(usage='usage: %prog [options] url',
                          description='Automated notarization of the response to an https'
                          + ' request made to the url \'url\', with https:// omitted.')
    parser.add_option('-e', '--header-file', action="store", type="string", dest='header_path',
                      help='if specified, the path to the file containing the HTTP headers to'
                      + ' be used in the request, in json format.')
    parser.add_option('-a', '--aws-query-check', action='store_true', dest='awscheck',
                      help='if set, %prog will perform a check of the PageSigner AWS oracle to verify it. '
                      + 'This takes a few seconds.')
    (options, args) = parser.parse_args()
    if len(args) != 1:
        parser.error('Need a url to notarize')
        exit(1)

    url_raw = args[0]
    if options.awscheck:
        check_oracle(oracle)
    host = url_raw.split('/')[0]
    url = '/'.join(url_raw.split('/')[1:])
    print ('using host', host)
    server_mod = probe_server_modulus(host)
    headers = "GET" + " /" + url + " HTTP/1.1" + "\r\n" + "Host: " + host + "\r\n"
    x = get_headers(options.header_path) if options.header_path else ''
    headers += x + "\r\n"

    if start_audit(host, headers, server_mod):
        print ('successfully finished')
        exit(0)
    else:
        print ('failed to complete notarization')
        exit(1)
205  oracles.py
@@ -1,205 +0,0 @@
import re
from xml.dom import minidom
import urllib2
from base64 import b64decode, b64encode


snapshotID = 'snap-03bae56722ceec3f0'
imageID = 'ami-1f447c65'

oracle_modulus = [186,187,68,57,92,215,243,62,188,248,16,13,3,29,40,217,208,206,78,13,202,184,82,121,26,51,203,41,169,11,4,102,228,127,110,117,170,48,210,212,160,51,175,246,110,178,43,106,94,255,69,0,217,91,225,7,84,133,193,43,177,254,75,191,109,50,212,190,177,61,64,230,188,105,56,252,40,3,91,190,117,1,52,30,210,137,136,13,216,110,83,21,164,56,248,215,33,159,129,149,85,236,130,194,79,227,184,135,133,61,85,201,243,225,121,233,36,84,207,218,86,68,99,21,150,252,28,220,4,93,81,57,214,94,147,56,234,236,0,178,93,39,48,143,21,120,241,33,73,239,185,255,255,79,112,194,72,226,84,158,182,96,159,33,111,57,212,27,23,133,223,152,101,240,98,181,94,38,147,195,187,245,226,158,11,102,91,91,47,146,178,65,180,73,176,209,32,27,99,183,254,161,115,38,186,31,132,165,252,189,226,72,152,219,177,52,47,178,121,45,30,143,78,142,223,133,112,136,72,165,166,225,18,62,249,119,157,198,68,114,69,199,32,121,201,72,159,13,37,66,160,210,83,163,131,128,54,178,219,5,74,94,214,244,43,123,140,156,192,89,120,211,61,192,76,70,176,122,247,198,21,220,79,212,200,192,88,126,200,115,71,102,66,92,102,60,179,213,125,123,86,195,67,204,71,222,249,46,242,179,11,111,12,158,91,189,215,72,190,15,165,11,102,51,1,91,116,127,31,12,55,193,249,170,15,231,13,189,60,73,8,239,238,18,44,131,78,190,164,46,41,169,139,43,230,105,2,170,231,202,203,126,74,202,172,112,217,194,26,202,140,71,183,45,239,213,254,213,139,27,95,163,172,27,176,189,233,59,181,49,225,220,125,90,182,120,183,236,62,100,170,130,122,202,206,193,77,130,250,167,187,238,39,197,216,183,56,203,72,122,168,64,217,225,8,233,13,164,224,23,255,239,230,44,90,31,149,106,207,28,9,249,154,163,84,231,149,167,59,194,193,41,106,239,30,137,188,78,45,66,30,224,233,181,132,146,106,227,135,229,106,71,168,69,149,167,154,150,106,29,130,114,109,11,66,120,42,128,247,166,248,152,103,131,56,88,37,46,19,240,110,135,15,234,44,39,87,65,232,105,2,163]

oracle = {'name':'tlsnotarygroup5',
"IP":"54.158.251.14",
"port":"10011",
'DI':'https://ec2.us-east-1.amazonaws.com/?AWSAccessKeyId=AKIAIHZGACNJKBHFWOTQ&Action=DescribeInstances&Expires=2025-01-01&InstanceId=i-0858c02ad9a33c579&SignatureMethod=HmacSHA256&SignatureVersion=2&Version=2014-10-01&Signature=AWkxF%2FlBVL%2FBl2WhQC62qGJ80qhL%2B%2B%2FJXvSp8mm5sIg%3D',
'DV':'https://ec2.us-east-1.amazonaws.com/?AWSAccessKeyId=AKIAIHZGACNJKBHFWOTQ&Action=DescribeVolumes&Expires=2025-01-01&SignatureMethod=HmacSHA256&SignatureVersion=2&Version=2014-10-01&VolumeId=vol-056223d4e1ce55d9c&Signature=DCYnV1vNqE3cyTm6bmtNS1idGdBT7DcbeLtZfcm3ljo%3D',
'GCO':'https://ec2.us-east-1.amazonaws.com/?AWSAccessKeyId=AKIAIHZGACNJKBHFWOTQ&Action=GetConsoleOutput&Expires=2025-01-01&InstanceId=i-0858c02ad9a33c579&SignatureMethod=HmacSHA256&SignatureVersion=2&Version=2014-10-01&Signature=I%2F1kp7oSli9GvYrrP5HD52D6nOy7yCq9dowaDomSAOQ%3D',
'GU':'https://iam.amazonaws.com/?AWSAccessKeyId=AKIAIHZGACNJKBHFWOTQ&Action=GetUser&Expires=2025-01-01&SignatureMethod=HmacSHA256&SignatureVersion=2&Version=2010-05-08&Signature=N%2BsdNA6z3QReVsHsf7RV4uZLzS5Pqi0n3QSfqBAMs8o%3D',
'DIA':'https://ec2.us-east-1.amazonaws.com/?AWSAccessKeyId=AKIAIHZGACNJKBHFWOTQ&Action=DescribeInstanceAttribute&Attribute=userData&Expires=2025-01-01&InstanceId=i-0858c02ad9a33c579&SignatureMethod=HmacSHA256&SignatureVersion=2&Version=2014-10-01&Signature=ENM%2Bw9WkB4U4kYDMN6kowJhZenuCEX3c1G7xSuu6GZA%3D',
'instanceId': 'i-0858c02ad9a33c579',
"modulus":[186,187,68,57,92,215,243,62,188,248,16,13,3,29,40,217,208,206,78,13,202,184,82,121,26,51,203,41,169,11,4,102,228,127,110,117,170,48,210,212,160,51,175,246,110,178,43,106,94,255,69,0,217,91,225,7,84,133,193,43,177,254,75,191,109,50,212,190,177,61,64,230,188,105,56,252,40,3,91,190,117,1,52,30,210,137,136,13,216,110,83,21,164,56,248,215,33,159,129,149,85,236,130,194,79,227,184,135,133,61,85,201,243,225,121,233,36,84,207,218,86,68,99,21,150,252,28,220,4,93,81,57,214,94,147,56,234,236,0,178,93,39,48,143,21,120,241,33,73,239,185,255,255,79,112,194,72,226,84,158,182,96,159,33,111,57,212,27,23,133,223,152,101,240,98,181,94,38,147,195,187,245,226,158,11,102,91,91,47,146,178,65,180,73,176,209,32,27,99,183,254,161,115,38,186,31,132,165,252,189,226,72,152,219,177,52,47,178,121,45,30,143,78,142,223,133,112,136,72,165,166,225,18,62,249,119,157,198,68,114,69,199,32,121,201,72,159,13,37,66,160,210,83,163,131,128,54,178,219,5,74,94,214,244,43,123,140,156,192,89,120,211,61,192,76,70,176,122,247,198,21,220,79,212,200,192,88,126,200,115,71,102,66,92,102,60,179,213,125,123,86,195,67,204,71,222,249,46,242,179,11,111,12,158,91,189,215,72,190,15,165,11,102,51,1,91,116,127,31,12,55,193,249,170,15,231,13,189,60,73,8,239,238,18,44,131,78,190,164,46,41,169,139,43,230,105,2,170,231,202,203,126,74,202,172,112,217,194,26,202,140,71,183,45,239,213,254,213,139,27,95,163,172,27,176,189,233,59,181,49,225,220,125,90,182,120,183,236,62,100,170,130,122,202,206,193,77,130,250,167,187,238,39,197,216,183,56,203,72,122,168,64,217,225,8,233,13,164,224,23,255,239,230,44,90,31,149,106,207,28,9,249,154,163,84,231,149,167,59,194,193,41,106,239,30,137,188,78,45,66,30,224,233,181,132,146,106,227,135,229,106,71,168,69,149,167,154,150,106,29,130,114,109,11,66,120,42,128,247,166,248,152,103,131,56,88,37,46,19,240,110,135,15,234,44,39,87,65,232,105,2,163]
}
|
||||
|
||||
|
||||
def get_xhr(url):
|
||||
xml = urllib2.urlopen(url).read()
|
||||
return minidom.parseString(xml)
|
||||
|
||||
def getChildNodes(x):
|
||||
return [a for a in x[0].childNodes if not a.nodeName=='#text']
|
||||
|
||||
#assuming both events happened on the same day, get the time
|
||||
#difference between them in seconds
|
||||
#the time string looks like "2015-04-15T19:00:59.000Z"
|
||||
def getSecondsDelta (later, sooner):
|
||||
assert (len(later) == 24)
|
||||
if (later[:11] != sooner[:11]):
|
||||
return 999999; #not on the same day
|
||||
laterTime = later[11:19].split(':')
|
||||
soonerTime = sooner[11:19].split(':')
|
||||
laterSecs = int(laterTime[0])*3600+int(laterTime[1])*60+int(laterTime[2])
|
||||
soonerSecs = int(soonerTime[0])*3600+int(soonerTime[1])*60+int(soonerTime[2])
|
||||
return laterSecs - soonerSecs
|
||||
|
||||
|
||||
def modulus_from_pubkey(pem_pubkey):
|
||||
b64_str = ''
|
||||
lines = pem_pubkey.split('\n')
|
||||
#omit header and footer lines
|
||||
for i in range(1,len(lines)-1):
|
||||
b64_str += lines[i]
|
||||
der = b64decode(b64_str)
|
||||
#last 5 bytes are 2 DER bytes and 3 bytes exponent, our pubkey is the preceding 512 bytes
|
||||
pubkey = der[len(der)-517:len(der)-5]
|
||||
return pubkey


def checkDescribeInstances(xmlDoc, instanceId, IP):
    try:
        rs = xmlDoc.getElementsByTagName('reservationSet')
        assert rs.length == 1
        rs_items = getChildNodes(rs)
        assert len(rs_items) == 1
        ownerId = rs_items[0].getElementsByTagName('ownerId')[0].firstChild.data
        isets = rs_items[0].getElementsByTagName('instancesSet')
        assert isets.length == 1
        instances = getChildNodes(isets)
        assert len(instances) == 1
        parent = instances[0]
        assert parent.getElementsByTagName('instanceId')[0].firstChild.data == instanceId
        assert parent.getElementsByTagName('imageId')[0].firstChild.data == imageID
        assert parent.getElementsByTagName('instanceState')[0].getElementsByTagName('name')[0].firstChild.data == 'running'
        launchTime = parent.getElementsByTagName('launchTime')[0].firstChild.data
        assert parent.getElementsByTagName('ipAddress')[0].firstChild.data == IP
        assert parent.getElementsByTagName('rootDeviceType')[0].firstChild.data == 'ebs'
        assert parent.getElementsByTagName('rootDeviceName')[0].firstChild.data == '/dev/xvda'
        devices = parent.getElementsByTagName('blockDeviceMapping')[0].getElementsByTagName('item')
        assert devices.length == 1
        assert devices[0].getElementsByTagName('deviceName')[0].firstChild.data == '/dev/xvda'
        assert devices[0].getElementsByTagName('ebs')[0].getElementsByTagName('status')[0].firstChild.data == 'attached'
        volAttachTime = devices[0].getElementsByTagName('ebs')[0].getElementsByTagName('attachTime')[0].firstChild.data
        volumeId = devices[0].getElementsByTagName('ebs')[0].getElementsByTagName('volumeId')[0].firstChild.data
        #get seconds from "2015-04-15T19:00:59.000Z"
        assert getSecondsDelta(volAttachTime, launchTime) <= 3
        assert parent.getElementsByTagName('virtualizationType')[0].firstChild.data == 'hvm'
        return {'ownerId':ownerId, 'volumeId':volumeId, 'volAttachTime':volAttachTime, 'launchTime':launchTime}
    except:
        return False


def checkDescribeVolumes(xmlDoc, instanceId, volumeId, volAttachTime):
    try:
        v = xmlDoc.getElementsByTagName('volumeSet')
        volumes = getChildNodes(v)
        assert len(volumes) == 1
        volume = volumes[0]
        assert volume.getElementsByTagName('volumeId')[0].firstChild.data == volumeId
        assert volume.getElementsByTagName('snapshotId')[0].firstChild.data == snapshotID
        assert volume.getElementsByTagName('status')[0].firstChild.data == 'in-use'
        volCreateTime = volume.getElementsByTagName('createTime')[0].firstChild.data
        attVolumes = volume.getElementsByTagName('attachmentSet')[0].getElementsByTagName('item')
        assert attVolumes.length == 1
        attVolume = attVolumes[0]
        assert attVolume.getElementsByTagName('volumeId')[0].firstChild.data == volumeId
        assert attVolume.getElementsByTagName('instanceId')[0].firstChild.data == instanceId
        assert attVolume.getElementsByTagName('device')[0].firstChild.data == '/dev/xvda'
        assert attVolume.getElementsByTagName('status')[0].firstChild.data == 'attached'
        attTime = attVolume.getElementsByTagName('attachTime')[0].firstChild.data
        assert volAttachTime == attTime
        #Crucial: volume was created from snapshot and attached at the same instant
        #this guarantees that there was no time window to modify it
        assert getSecondsDelta(attTime, volCreateTime) == 0
        return True
    except:
        return False


def checkGetConsoleOutput(xmlDoc, instanceId, launchTime):
    try:
        assert xmlDoc.getElementsByTagName('instanceId')[0].firstChild.data == instanceId
        timestamp = xmlDoc.getElementsByTagName('timestamp')[0].firstChild.data
        #prevent funny business: last consoleLog entry no later than 5 minutes after instance starts
        assert getSecondsDelta(timestamp, launchTime) <= 300
        b64data = xmlDoc.getElementsByTagName('output')[0].firstChild.data
        logstr = b64decode(b64data)
        sigmark = 'PageSigner public key for verification'
        pkstartmark = '-----BEGIN PUBLIC KEY-----'
        pkendmark = '-----END PUBLIC KEY-----'

        mark_start = logstr.index(sigmark)
        assert mark_start != -1
        pubkey_start = mark_start + logstr[mark_start:].index(pkstartmark)
        pubkey_end = pubkey_start + logstr[pubkey_start:].index(pkendmark) + len(pkendmark)
        pk = logstr[pubkey_start:pubkey_end]
        assert len(pk) > 0
        return pk
    except:
        return False


# "userData" allows passing an arbitrary script to the instance at launch. It MUST be empty.
# This is a sanity check because the instance is stripped of the code which parses userData.
def checkDescribeInstanceAttribute(xmlDoc, instanceId):
    try:
        assert xmlDoc.getElementsByTagName('instanceId')[0].firstChild.data == instanceId
        assert not xmlDoc.getElementsByTagName('userData')[0].firstChild
        return True
    except:
        return False


def checkGetUser(xmlDoc, ownerId):
    #try:
    assert xmlDoc.getElementsByTagName('UserId')[0].firstChild.data == ownerId
    assert xmlDoc.getElementsByTagName('Arn')[0].firstChild.data[-(len(ownerId) + len(':root')):] == ownerId+':root'
    #except:
    #    return False
    return True


def check_oracle(o):
    print ('Verifying PageSigner AWS oracle server, this may take up to a minute...')
    xmlDoc = get_xhr(o['DI'])
    CDIresult = checkDescribeInstances(xmlDoc, o['instanceId'], o['IP'])
    if not CDIresult:
        raise Exception('checkDescribeInstances')
    print ('check 1 of 5 successful')

    xmlDoc = get_xhr(o['DV'])
    CDVresult = checkDescribeVolumes(xmlDoc, o['instanceId'], CDIresult['volumeId'], CDIresult['volAttachTime'])
    if not CDVresult:
        raise Exception('checkDescribeVolumes')
    print ('check 2 of 5 successful')

    xmlDoc = get_xhr(o['GU'])
    CGUresult = checkGetUser(xmlDoc, CDIresult['ownerId'])
    if not CGUresult:
        raise Exception('checkGetUser')
    print ('check 3 of 5 successful')

    xmlDoc = get_xhr(o['GCO'])
    GCOresult = checkGetConsoleOutput(xmlDoc, o['instanceId'], CDIresult['launchTime'])
    if not GCOresult:
        raise Exception('checkGetConsoleOutput')
    print ('check 4 of 5 successful')

    sigmod_binary = bytearray('').join(map(chr, o['modulus']))
    if modulus_from_pubkey(GCOresult) != sigmod_binary:
        raise Exception("modulus from pubkey")

    xmlDoc = get_xhr(o['DIA'])
    DIAresult = checkDescribeInstanceAttribute(xmlDoc, o['instanceId'])
    if not DIAresult:
        raise Exception('checkDescribeInstanceAttribute')
    print ('check 5 of 5 successful')

    mark = 'AWSAccessKeyId='
    ids = []
    #"AWSAccessKeyId" should be the same in every query to prove that the queries are made on behalf of AWS user "root".
    #An attacker could be a user with limited privileges for whom the API would report only partial information.
    for url in [o['DI'], o['DV'], o['GU'], o['GCO'], o['DIA']]:
        start = url.index(mark) + len(mark)
        ids.append(url[start:start + url[start:].index('&')])
    assert len(set(ids)) == 1
    print('oracle verification successfully finished')
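#A minimal usage sketch (assumed, not part of the original script): the oracle
#descriptor 'o' is a dict carrying the five signed AWS API query URLs plus the
#instance metadata that the checks above compare against. URLs and IDs below
#are hypothetical placeholders.
#
#oracle = {
#    'DI':  'https://ec2.amazonaws.com/?Action=DescribeInstances&...',
#    'DV':  'https://ec2.amazonaws.com/?Action=DescribeVolumes&...',
#    'GU':  'https://iam.amazonaws.com/?Action=GetUser&...',
#    'GCO': 'https://ec2.amazonaws.com/?Action=GetConsoleOutput&...',
#    'DIA': 'https://ec2.amazonaws.com/?Action=DescribeInstanceAttribute&...',
#    'instanceId': 'i-0123456789abcdef0',
#    'IP': '54.158.251.14',
#    'modulus': [186, 187, 68, ...]  #the notary pubkey modulus bytes, as above
#}
#check_oracle(oracle)  #raises on the first failed check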
@@ -1,104 +0,0 @@
#!/usr/bin/env python
from __future__ import print_function
import tarfile
from hashlib import md5, sha1, sha256
from os.path import join
import sys, os

data_dir = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.dirname(data_dir))
install_dir = os.path.dirname(os.path.dirname(data_dir))


def extract_audit_data(audit_filename):
    audit_data = {}
    with open(audit_filename, 'rb') as f:
        header = f.read(29)
        print (header)
        if header != 'tlsnotary notarization file\n\n':
            raise Exception("Invalid file format")
        version = f.read(2)
        if version != '\x00\x02':
            raise Exception("Incompatible file version")
        audit_data['cipher_suite'] = shared.ba2int(f.read(2))
        audit_data['client_random'] = f.read(32)
        audit_data['server_random'] = f.read(32)
        audit_data['pms1'] = f.read(24)
        audit_data['pms2'] = f.read(24)
        full_cert = f.read(3)
        chain_serialized_len = shared.ba2int(full_cert)
        chain_serialized = f.read(chain_serialized_len)
        full_cert += chain_serialized
        audit_data['tlsver'] = f.read(2)
        audit_data['initial_tlsver'] = f.read(2)
        response_len = shared.ba2int(f.read(8))
        audit_data['response'] = f.read(response_len)
        IV_len = shared.ba2int(f.read(2))
        if IV_len not in [258, 16]:
            print ("IV length was: ", IV_len)
            raise Exception("Wrong IV format in audit file")
        audit_data['IV'] = f.read(IV_len)
        sig_len = shared.ba2int(f.read(2))
        audit_data['signature'] = f.read(sig_len)
        audit_data['commit_hash'] = f.read(32)
        audit_data['pubkey_pem'] = f.read(sig_len)

    offset = 0
    chain = []
    while (offset < chain_serialized_len):
        l = shared.ba2int(chain_serialized[offset:offset+3])
        offset += 3
        cert = chain_serialized[offset:offset+l]
        offset += l
        chain.append(cert)

    audit_data['certs'] = chain
    audit_data['fullcert'] = full_cert
    return audit_data
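#For reference, the byte layout parsed above (reconstructed from the reads, not
#from a separate spec): 29-byte magic, 2-byte version, 2-byte cipher suite,
#32+32 bytes client/server randoms, 24+24 bytes PMS halves, 3-byte length plus
#the serialized cert chain, 2+2 bytes TLS versions, 8-byte response length plus
#response, 2-byte IV length plus IV, 2-byte sig length plus signature,
#32-byte commit hash, then the notary pubkey PEM.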


#unpack and check validity of Python modules
def first_run_check(modname, modhash):
    if not modhash: return
    mod_dir = join(data_dir, 'python', modname)
    if not os.path.exists(mod_dir):
        print ('Extracting '+modname+'.tar.gz...')
        with open(join(data_dir, 'python', modname+'.tar.gz'), 'rb') as f: tarfile_data = f.read()
        if md5(tarfile_data).hexdigest() != modhash:
            raise Exception ('Wrong hash')
        os.chdir(join(data_dir, 'python'))
        tar = tarfile.open(join(data_dir, 'python', modname+'.tar.gz'), 'r:gz')
        tar.extractall()
        tar.close()


if __name__ == "__main__":
    #for md5 hash, see https://pypi.python.org/pypi/<module name>/<module version>
    modules_to_load = {'rsa-3.1.4':'b6b1c80e1931d4eba8538fd5d4de1355',\
                       'pyasn1-0.1.7':'2cbd80fcd4c7b1c82180d3d76fee18c8',\
                       'slowaes':'', 'requests-2.3.0':'7449ffdc8ec9ac37bbcd286003c80f00'}
    for x,h in modules_to_load.iteritems():
        first_run_check(x,h)
        sys.path.append(join(data_dir, 'python', x))

    import rsa
    import pyasn1
    import requests
    from pyasn1.type import univ
    from pyasn1.codec.der import encoder, decoder
    from slowaes import AESModeOfOperation
    import shared
    shared.load_program_config()
    shared.import_reliable_sites(join(install_dir,'src','shared'))

    print (sys.argv)

    if (len(sys.argv) != 2):
        raise Exception("Invalid argument")
    audit_data = extract_audit_data(sys.argv[1])
    for i,c in enumerate(audit_data['certs']):
        with open(str(i)+'.der','wb') as f:
            f.write(c)

    with open('fullcert','wb') as f:
        f.write(shared.bi2ba(len(audit_data['fullcert']), fixed=3))
        f.write(audit_data['fullcert'])
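#Illustrative usage (assumed invocation): running this parser against a
#notarization file dumps each certificate in the chain as 0.der, 1.der, ...
#plus the length-prefixed 'fullcert' blob, after which a standard tool can
#inspect the extracted DER certificates:
#
#  python parse-pgsg.py imported.pgsg
#  openssl x509 -inform DER -in 0.der -noout -text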
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,656 +0,0 @@
#!/usr/bin/python
#
# aes.py: implements AES - Advanced Encryption Standard
# from the SlowAES project, http://code.google.com/p/slowaes/
#
# Copyright (c) 2008 Josh Davis ( http://www.josh-davis.org ),
# Alex Martelli ( http://www.aleax.it )
#
# Ported from C code written by Laurent Haan ( http://www.progressive-coding.com )
#
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/
#
import os
import sys
import math


def append_PKCS7_padding(s):
    """return s padded to a multiple of 16-bytes by PKCS7 padding"""
    numpads = 16 - (len(s)%16)
    return s + numpads*chr(numpads)


def strip_PKCS7_padding(s):
    """return s stripped of PKCS7 padding"""
    if len(s)%16 or not s:
        raise ValueError("String of len %d can't be PCKS7-padded" % len(s))
    numpads = ord(s[-1])
    if numpads > 16:
        raise ValueError("String ending with %r can't be PCKS7-padded" % s[-1])
    return s[:-numpads]
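# Worked example of the padding scheme (illustrative): a 3-byte input gets 13
# bytes of padding, each pad byte holding the pad length.
#   append_PKCS7_padding('ABC')              -> 'ABC' + chr(13)*13   (16 bytes)
#   strip_PKCS7_padding('ABC' + chr(13)*13)  -> 'ABC'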


class AES(object):
    # valid key sizes
    keySize = dict(SIZE_128=16, SIZE_192=24, SIZE_256=32)

    # Rijndael S-box
    sbox = [0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67,
            0x2b, 0xfe, 0xd7, 0xab, 0x76, 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59,
            0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, 0xb7,
            0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5, 0xe5, 0xf1,
            0x71, 0xd8, 0x31, 0x15, 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05,
            0x9a, 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, 0x09, 0x83,
            0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29,
            0xe3, 0x2f, 0x84, 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b,
            0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, 0xd0, 0xef, 0xaa,
            0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c,
            0x9f, 0xa8, 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc,
            0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, 0xcd, 0x0c, 0x13, 0xec,
            0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19,
            0x73, 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee,
            0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, 0xe0, 0x32, 0x3a, 0x0a, 0x49,
            0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79,
            0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4,
            0xea, 0x65, 0x7a, 0xae, 0x08, 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6,
            0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, 0x70,
            0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9,
            0x86, 0xc1, 0x1d, 0x9e, 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e,
            0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, 0x8c, 0xa1,
            0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0,
            0x54, 0xbb, 0x16]

    # Rijndael Inverted S-box
    rsbox = [0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3,
             0x9e, 0x81, 0xf3, 0xd7, 0xfb, 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f,
             0xff, 0x87, 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, 0x54,
             0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c, 0x95, 0x0b,
             0x42, 0xfa, 0xc3, 0x4e, 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24,
             0xb2, 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, 0x72, 0xf8,
             0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xd4, 0xa4, 0x5c, 0xcc, 0x5d,
             0x65, 0xb6, 0x92, 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda,
             0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, 0x90, 0xd8, 0xab,
             0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3,
             0x45, 0x06, 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, 0xc1,
             0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, 0x3a, 0x91, 0x11, 0x41,
             0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6,
             0x73, 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9,
             0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, 0x47, 0xf1, 0x1a, 0x71, 0x1d,
             0x29, 0xc5, 0x89, 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b,
             0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0,
             0xfe, 0x78, 0xcd, 0x5a, 0xf4, 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07,
             0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, 0x60,
             0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f,
             0x93, 0xc9, 0x9c, 0xef, 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5,
             0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, 0x17, 0x2b,
             0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55,
             0x21, 0x0c, 0x7d]

    def getSBoxValue(self, num):
        """Retrieves a given S-Box Value"""
        return self.sbox[num]

    def getSBoxInvert(self, num):
        """Retrieves a given Inverted S-Box Value"""
        return self.rsbox[num]

    def rotate(self, word):
        """ Rijndael's key schedule rotate operation.

        Rotate a word eight bits to the left: eg, rotate(1d2c3a4f) == 2c3a4f1d
        Word is an char list of size 4 (32 bits overall).
        """
        return word[1:] + word[:1]

    # Rijndael Rcon
    Rcon = [0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36,
            0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97,
            0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72,
            0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66,
            0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04,
            0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d,
            0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3,
            0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61,
            0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a,
            0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40,
            0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc,
            0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5,
            0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a,
            0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d,
            0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c,
            0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35,
            0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4,
            0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc,
            0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04, 0x08,
            0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a,
            0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d,
            0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2,
            0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74,
            0xe8, 0xcb]

    def getRconValue(self, num):
        """Retrieves a given Rcon Value"""
        return self.Rcon[num]

    def core(self, word, iteration):
        """Key schedule core."""
        # rotate the 32-bit word 8 bits to the left
        word = self.rotate(word)
        # apply S-Box substitution on all 4 parts of the 32-bit word
        for i in range(4):
            word[i] = self.getSBoxValue(word[i])
        # XOR the output of the rcon operation with i to the first part
        # (leftmost) only
        word[0] = word[0] ^ self.getRconValue(iteration)
        return word

    def expandKey(self, key, size, expandedKeySize):
        """Rijndael's key expansion.

        Expands a 128/192/256-bit key into a 176/208/240-byte expanded key.

        expandedKey is a char list of large enough size,
        key is the non-expanded key.
        """
        # current expanded keySize, in bytes
        currentSize = 0
        rconIteration = 1
        expandedKey = [0] * expandedKeySize

        # set the 16, 24, 32 bytes of the expanded key to the input key
        for j in range(size):
            expandedKey[j] = key[j]
        currentSize += size

        while currentSize < expandedKeySize:
            # assign the previous 4 bytes to the temporary value t
            t = expandedKey[currentSize-4:currentSize]

            # every 16,24,32 bytes we apply the core schedule to t
            # and increment rconIteration afterwards
            if currentSize % size == 0:
                t = self.core(t, rconIteration)
                rconIteration += 1
            # For 256-bit keys, we add an extra sbox to the calculation
            if size == self.keySize["SIZE_256"] and ((currentSize % size) == 16):
                for l in range(4): t[l] = self.getSBoxValue(t[l])

            # We XOR t with the four-byte block 16,24,32 bytes before the new
            # expanded key. This becomes the next four bytes in the expanded
            # key.
            for m in range(4):
                expandedKey[currentSize] = expandedKey[currentSize - size] ^ \
                        t[m]
                currentSize += 1

        return expandedKey

    def addRoundKey(self, state, roundKey):
        """Adds (XORs) the round key to the state."""
        for i in range(16):
            state[i] ^= roundKey[i]
        return state

    def createRoundKey(self, expandedKey, roundKeyPointer):
        """Create a round key.
        Creates a round key from the given expanded key and the
        position within the expanded key.
        """
        roundKey = [0] * 16
        for i in range(4):
            for j in range(4):
                roundKey[j*4+i] = expandedKey[roundKeyPointer + i*4 + j]
        return roundKey

    def galois_multiplication(self, a, b):
        """Galois multiplication of 8 bit characters a and b."""
        p = 0
        for counter in range(8):
            if b & 1: p ^= a
            hi_bit_set = a & 0x80
            a <<= 1
            # keep a 8 bit
            a &= 0xFF
            if hi_bit_set:
                a ^= 0x1b
            b >>= 1
        return p

    #
    # substitute all the values from the state with the value in the SBox
    # using the state value as index for the SBox
    #
    def subBytes(self, state, isInv):
        if isInv: getter = self.getSBoxInvert
        else: getter = self.getSBoxValue
        for i in range(16): state[i] = getter(state[i])
        return state

    # iterate over the 4 rows and call shiftRow() with that row
    def shiftRows(self, state, isInv):
        for i in range(4):
            state = self.shiftRow(state, i*4, i, isInv)
        return state

    # each iteration shifts the row to the left by 1
    def shiftRow(self, state, statePointer, nbr, isInv):
        for i in range(nbr):
            if isInv:
                state[statePointer:statePointer+4] = \
                        state[statePointer+3:statePointer+4] + \
                        state[statePointer:statePointer+3]
            else:
                state[statePointer:statePointer+4] = \
                        state[statePointer+1:statePointer+4] + \
                        state[statePointer:statePointer+1]
        return state

    # galois multiplication of the 4x4 matrix
    def mixColumns(self, state, isInv):
        # iterate over the 4 columns
        for i in range(4):
            # construct one column by slicing over the 4 rows
            column = state[i:i+16:4]
            # apply the mixColumn on one column
            column = self.mixColumn(column, isInv)
            # put the values back into the state
            state[i:i+16:4] = column

        return state

    # galois multiplication of 1 column of the 4x4 matrix
    def mixColumn(self, column, isInv):
        if isInv: mult = [14, 9, 13, 11]
        else: mult = [2, 1, 1, 3]
        cpy = list(column)
        g = self.galois_multiplication

        column[0] = g(cpy[0], mult[0]) ^ g(cpy[3], mult[1]) ^ \
                    g(cpy[2], mult[2]) ^ g(cpy[1], mult[3])
        column[1] = g(cpy[1], mult[0]) ^ g(cpy[0], mult[1]) ^ \
                    g(cpy[3], mult[2]) ^ g(cpy[2], mult[3])
        column[2] = g(cpy[2], mult[0]) ^ g(cpy[1], mult[1]) ^ \
                    g(cpy[0], mult[2]) ^ g(cpy[3], mult[3])
        column[3] = g(cpy[3], mult[0]) ^ g(cpy[2], mult[1]) ^ \
                    g(cpy[1], mult[2]) ^ g(cpy[0], mult[3])
        return column

    # applies the 4 operations of the forward round in sequence
    def aes_round(self, state, roundKey):
        state = self.subBytes(state, False)
        state = self.shiftRows(state, False)
        state = self.mixColumns(state, False)
        state = self.addRoundKey(state, roundKey)
        return state

    # applies the 4 operations of the inverse round in sequence
    def aes_invRound(self, state, roundKey):
        state = self.shiftRows(state, True)
        state = self.subBytes(state, True)
        state = self.addRoundKey(state, roundKey)
        state = self.mixColumns(state, True)
        return state

    # Perform the initial operations, the standard round, and the final
    # operations of the forward aes, creating a round key for each round
    def aes_main(self, state, expandedKey, nbrRounds):
        state = self.addRoundKey(state, self.createRoundKey(expandedKey, 0))
        i = 1
        while i < nbrRounds:
            state = self.aes_round(state,
                                   self.createRoundKey(expandedKey, 16*i))
            i += 1
        state = self.subBytes(state, False)
        state = self.shiftRows(state, False)
        state = self.addRoundKey(state,
                                 self.createRoundKey(expandedKey, 16*nbrRounds))
        return state

    # Perform the initial operations, the standard round, and the final
    # operations of the inverse aes, creating a round key for each round
    def aes_invMain(self, state, expandedKey, nbrRounds):
        state = self.addRoundKey(state,
                                 self.createRoundKey(expandedKey, 16*nbrRounds))
        i = nbrRounds - 1
        while i > 0:
            state = self.aes_invRound(state,
                                      self.createRoundKey(expandedKey, 16*i))
            i -= 1
        state = self.shiftRows(state, True)
        state = self.subBytes(state, True)
        state = self.addRoundKey(state, self.createRoundKey(expandedKey, 0))
        return state

    # encrypts a 128 bit input block against the given key of size specified
    def encrypt(self, iput, key, size):
        output = [0] * 16
        # the number of rounds
        nbrRounds = 0
        # the 128 bit block to encode
        block = [0] * 16
        # set the number of rounds
        if size == self.keySize["SIZE_128"]: nbrRounds = 10
        elif size == self.keySize["SIZE_192"]: nbrRounds = 12
        elif size == self.keySize["SIZE_256"]: nbrRounds = 14
        else: return None

        # the expanded keySize
        expandedKeySize = 16*(nbrRounds+1)

        # Set the block values, for the block:
        # a0,0 a0,1 a0,2 a0,3
        # a1,0 a1,1 a1,2 a1,3
        # a2,0 a2,1 a2,2 a2,3
        # a3,0 a3,1 a3,2 a3,3
        # the mapping order is a0,0 a1,0 a2,0 a3,0 a0,1 a1,1 ... a2,3 a3,3
        #
        # iterate over the columns
        for i in range(4):
            # iterate over the rows
            for j in range(4):
                block[(i+(j*4))] = iput[(i*4)+j]

        # expand the key into an 176, 208, 240 bytes key
        # the expanded key
        expandedKey = self.expandKey(key, size, expandedKeySize)

        # encrypt the block using the expandedKey
        block = self.aes_main(block, expandedKey, nbrRounds)

        # unmap the block again into the output
        for k in range(4):
            # iterate over the rows
            for l in range(4):
                output[(k*4)+l] = block[(k+(l*4))]
        return output

    # decrypts a 128 bit input block against the given key of size specified
    def decrypt(self, iput, key, size):
        output = [0] * 16
        # the number of rounds
        nbrRounds = 0
        # the 128 bit block to decode
        block = [0] * 16
        # set the number of rounds
        if size == self.keySize["SIZE_128"]: nbrRounds = 10
        elif size == self.keySize["SIZE_192"]: nbrRounds = 12
        elif size == self.keySize["SIZE_256"]: nbrRounds = 14
        else: return None

        # the expanded keySize
        expandedKeySize = 16*(nbrRounds+1)

        # Set the block values, for the block:
        # a0,0 a0,1 a0,2 a0,3
        # a1,0 a1,1 a1,2 a1,3
        # a2,0 a2,1 a2,2 a2,3
        # a3,0 a3,1 a3,2 a3,3
        # the mapping order is a0,0 a1,0 a2,0 a3,0 a0,1 a1,1 ... a2,3 a3,3

        # iterate over the columns
        for i in range(4):
            # iterate over the rows
            for j in range(4):
                block[(i+(j*4))] = iput[(i*4)+j]
        # expand the key into an 176, 208, 240 bytes key
        expandedKey = self.expandKey(key, size, expandedKeySize)
        # decrypt the block using the expandedKey
        block = self.aes_invMain(block, expandedKey, nbrRounds)
        # unmap the block again into the output
        for k in range(4):
            # iterate over the rows
            for l in range(4):
                output[(k*4)+l] = block[(k+(l*4))]
        return output


class AESModeOfOperation(object):

    aes = AES()

    # structure of supported modes of operation
    modeOfOperation = dict(OFB=0, CFB=1, CBC=2)

    # converts a 16 character string into a number array
    def convertString(self, string, start, end, mode):
        if end - start > 16: end = start + 16
        if mode == self.modeOfOperation["CBC"]: ar = [0] * 16
        else: ar = []

        i = start
        j = 0
        while len(ar) < end - start:
            ar.append(0)
        while i < end:
            ar[j] = ord(string[i])
            j += 1
            i += 1
        return ar

    # Mode of Operation Encryption
    # stringIn - Input String
    # mode - mode of type modeOfOperation
    # key - a number array of the bit length size
    # size - the bit length of the key
    # IV - the 128 bit number array Initialization Vector
    def encrypt(self, stringIn, mode, key, size, IV):
        if len(key) % size:
            return None
        if len(IV) % 16:
            return None
        # the AES input/output
        plaintext = []
        iput = [0] * 16
        output = []
        ciphertext = [0] * 16
        # the output cipher string
        cipherOut = []
        # char firstRound
        firstRound = True
        if stringIn != None:
            for j in range(int(math.ceil(float(len(stringIn))/16))):
                start = j*16
                end = j*16+16
                if end > len(stringIn):
                    end = len(stringIn)
                plaintext = self.convertString(stringIn, start, end, mode)
                # print 'PT@%s:%s' % (j, plaintext)
                if mode == self.modeOfOperation["CFB"]:
                    if firstRound:
                        output = self.aes.encrypt(IV, key, size)
                        firstRound = False
                    else:
                        output = self.aes.encrypt(iput, key, size)
                    for i in range(16):
                        if len(plaintext)-1 < i:
                            ciphertext[i] = 0 ^ output[i]
                        elif len(output)-1 < i:
                            ciphertext[i] = plaintext[i] ^ 0
                        elif len(plaintext)-1 < i and len(output) < i:
                            ciphertext[i] = 0 ^ 0
                        else:
                            ciphertext[i] = plaintext[i] ^ output[i]
                    for k in range(end-start):
                        cipherOut.append(ciphertext[k])
                    iput = ciphertext
                elif mode == self.modeOfOperation["OFB"]:
                    if firstRound:
                        output = self.aes.encrypt(IV, key, size)
                        firstRound = False
                    else:
                        output = self.aes.encrypt(iput, key, size)
                    for i in range(16):
                        if len(plaintext)-1 < i:
                            ciphertext[i] = 0 ^ output[i]
                        elif len(output)-1 < i:
                            ciphertext[i] = plaintext[i] ^ 0
                        elif len(plaintext)-1 < i and len(output) < i:
                            ciphertext[i] = 0 ^ 0
                        else:
                            ciphertext[i] = plaintext[i] ^ output[i]
                    for k in range(end-start):
                        cipherOut.append(ciphertext[k])
                    iput = output
                elif mode == self.modeOfOperation["CBC"]:
                    for i in range(16):
                        if firstRound:
                            iput[i] = plaintext[i] ^ IV[i]
                        else:
                            iput[i] = plaintext[i] ^ ciphertext[i]
                    # print 'IP@%s:%s' % (j, iput)
                    firstRound = False
                    ciphertext = self.aes.encrypt(iput, key, size)
                    # always 16 bytes because of the padding for CBC
                    for k in range(16):
                        cipherOut.append(ciphertext[k])
        return mode, len(stringIn), cipherOut

    # Mode of Operation Decryption
    # cipherIn - Encrypted String
    # originalsize - The unencrypted string length - required for CBC
    # mode - mode of type modeOfOperation
    # key - a number array of the bit length size
    # size - the bit length of the key
    # IV - the 128 bit number array Initialization Vector
    def decrypt(self, cipherIn, originalsize, mode, key, size, IV):
        # cipherIn = unescCtrlChars(cipherIn)
        if len(key) % size:
            return None
        if len(IV) % 16:
            return None
        # the AES input/output
        ciphertext = []
        iput = []
        output = []
        plaintext = [0] * 16
        # the output plain text string
        stringOut = ''
        # char firstRound
        firstRound = True
        if cipherIn != None:
            for j in range(int(math.ceil(float(len(cipherIn))/16))):
                start = j*16
                end = j*16+16
                if j*16+16 > len(cipherIn):
                    end = len(cipherIn)
                ciphertext = cipherIn[start:end]
                if mode == self.modeOfOperation["CFB"]:
                    if firstRound:
                        output = self.aes.encrypt(IV, key, size)
                        firstRound = False
                    else:
                        output = self.aes.encrypt(iput, key, size)
                    for i in range(16):
                        if len(output)-1 < i:
                            plaintext[i] = 0 ^ ciphertext[i]
                        elif len(ciphertext)-1 < i:
                            plaintext[i] = output[i] ^ 0
                        elif len(output)-1 < i and len(ciphertext) < i:
                            plaintext[i] = 0 ^ 0
                        else:
                            plaintext[i] = output[i] ^ ciphertext[i]
                    for k in range(end-start):
                        stringOut += chr(plaintext[k])
                    iput = ciphertext
                elif mode == self.modeOfOperation["OFB"]:
                    if firstRound:
                        output = self.aes.encrypt(IV, key, size)
                        firstRound = False
                    else:
                        output = self.aes.encrypt(iput, key, size)
                    for i in range(16):
                        if len(output)-1 < i:
                            plaintext[i] = 0 ^ ciphertext[i]
                        elif len(ciphertext)-1 < i:
                            plaintext[i] = output[i] ^ 0
                        elif len(output)-1 < i and len(ciphertext) < i:
                            plaintext[i] = 0 ^ 0
                        else:
                            plaintext[i] = output[i] ^ ciphertext[i]
                    for k in range(end-start):
                        stringOut += chr(plaintext[k])
                    iput = output
                elif mode == self.modeOfOperation["CBC"]:
                    output = self.aes.decrypt(ciphertext, key, size)
                    for i in range(16):
                        if firstRound:
                            plaintext[i] = IV[i] ^ output[i]
                        else:
                            plaintext[i] = iput[i] ^ output[i]
                    firstRound = False
                    if originalsize is not None and originalsize < end:
                        for k in range(originalsize-start):
                            stringOut += chr(plaintext[k])
                    else:
                        for k in range(end-start):
                            stringOut += chr(plaintext[k])
                    iput = ciphertext
        return stringOut


def encryptData(key, data, mode=AESModeOfOperation.modeOfOperation["CBC"]):
    """encrypt `data` using `key`

    `key` should be a string of bytes.

    returned cipher is a string of bytes prepended with the initialization
    vector.

    """
    key = map(ord, key)
    if mode == AESModeOfOperation.modeOfOperation["CBC"]:
        data = append_PKCS7_padding(data)
    keysize = len(key)
    assert keysize in AES.keySize.values(), 'invalid key size: %s' % keysize
    # create a new iv using random data
    iv = [ord(i) for i in os.urandom(16)]
    moo = AESModeOfOperation()
    (mode, length, ciph) = moo.encrypt(data, mode, key, keysize, iv)
    # With padding, the original length does not need to be known. It's a bad
    # idea to store the original message length.
    # prepend the iv.
    return ''.join(map(chr, iv)) + ''.join(map(chr, ciph))


def decryptData(key, data, mode=AESModeOfOperation.modeOfOperation["CBC"]):
    """decrypt `data` using `key`

    `key` should be a string of bytes.

    `data` should have the 16-byte initialization vector prepended as a
    string of bytes.

    """
    key = map(ord, key)
    keysize = len(key)
    assert keysize in AES.keySize.values(), 'invalid key size: %s' % keysize
    # iv is first 16 bytes
    iv = map(ord, data[:16])
    data = map(ord, data[16:])
    moo = AESModeOfOperation()
    decr = moo.decrypt(data, None, mode, key, keysize, iv)
    if mode == AESModeOfOperation.modeOfOperation["CBC"]:
        decr = strip_PKCS7_padding(decr)
    return decr


def generateRandomKey(keysize):
    """Generates a key from random data of length `keysize`.

    The returned key is a string of bytes.

    """
    if keysize not in (16, 24, 32):
        emsg = 'Invalid keysize, %s. Should be one of (16, 24, 32).'
        raise ValueError, emsg % keysize
    return os.urandom(keysize)
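# A hedged round-trip sketch using the convenience helpers above (Python 2,
# matching this module): key and data are byte strings; encryptData prepends
# the random IV to the ciphertext and decryptData recovers it.
#
#   key = generateRandomKey(16)
#   ciph = encryptData(key, 'secret message')
#   assert decryptData(key, ciph) == 'secret message'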

if __name__ == "__main__":
    moo = AESModeOfOperation()
    cleartext = "This is a test!"
    cypherkey = [143,194,34,208,145,203,230,143,177,246,97,206,145,92,255,84]
    iv = [103,35,148,239,76,213,47,118,255,222,123,176,106,134,98,92]
    mode, orig_len, ciph = moo.encrypt(cleartext, moo.modeOfOperation["CBC"],
            cypherkey, moo.aes.keySize["SIZE_128"], iv)
    print 'm=%s, ol=%s (%s), ciph=%s' % (mode, orig_len, len(cleartext), ciph)
    decr = moo.decrypt(ciph, orig_len, mode, cypherkey,
            moo.aes.keySize["SIZE_128"], iv)
    print decr
@@ -1,2 +0,0 @@
from tlsn_common import *
from tlsn_ssl import *
@@ -1,209 +0,0 @@
#This is a comment.
#We only use reliable sites here which have a 2048 bit RSA modulus
#and which don't serve different certificates depending on
#your location. This rules out using very large sites like google/facebook/twitter.
#Also make sure they don't expire too soon.
#Some websites, e.g. apple.com, microsoft.com, will not even respond with a TLS alert.
#You can copy-paste the modulus straight from Firefox's
#View Certificate - Details - Subject's Public Key
#There must be a blank line between entries.
#
#Name=whatsapp.com
#Expires=09/09/2019
#Modulus=
d3 6a fd 68 f8 ec f1 3a 2b 6d 01 7c 92 f7 98 37
f6 fc 3a 19 91 4a 19 a8 7f 85 a7 6b 46 b2 e1 a3
66 f6 63 4f e7 90 83 cf 00 00 a7 7a 68 48 38 04
0a e7 c3 d5 96 f4 2a 95 21 b5 04 e4 c3 23 cd 56
81 92 cd 34 0f 32 67 7e 64 d9 41 52 08 2e 43 2d
2a d3 c2 95 7a f6 20 36 10 fa 36 f2 18 2e e5 f1
6a 40 79 ae 38 5d 1a 24 8e 64 a1 85 a2 c4 df dd
89 9b a9 02 e9 cc a4 5c 4a bc a2 5a 89 c5 74 6d
14 e5 7e 71 b8 97 f9 d0 d1 06 f0 cc a4 cb 66 d5
a8 8c e1 25 27 4b 3f 80 db 1c 14 ca ec b8 35 89
f6 e6 2e 36 e3 3b 9a 9c 03 94 18 e2 e9 1d 66 4d
4f 74 d2 58 16 9b 44 cd 9d 82 53 ad 85 10 bf 73
3a bf 1c 9e 45 cc a7 8a c0 3e ab c7 31 ce ef 60
ac 19 9e 03 bd ab 7d 4c 08 b6 d2 8a 0b 3a 84 cf
50 a8 69 a0 d7 29 04 65 5f 8e 65 4f 3d a4 5c 64
8d 81 0f e3 61 e0 b0 6b 9f 2d 29 94 ce 74 fd 73
#
#Name=wordpress.org
#Expires=12/15/2020
#Modulus=
b7 ee 9f ad 32 5f 3b 77 0b d7 f6 f1 9b 4d 88 15
90 78 92 84 e0 4e 77 f0 9a a3 cf 25 11 d9 bf 00
14 12 20 bd 96 a6 6f da 53 e7 c5 0b 40 8c a6 44
7a 2f 53 4c 54 90 44 b2 22 87 9d 02 01 93 53 f1
c7 ba 1a bb 8d 4d 8f 79 bc b2 c9 00 71 78 0a c8
ba 9b 93 99 7f 1e 98 93 f9 66 ce c4 65 03 07 53
f3 91 8b 9e 5e 23 72 01 f4 b9 4f db 65 3c 33 d4
c9 42 6b 17 43 71 5b b6 cc 02 b5 3e 72 e3 fb 07
04 2f 01 8f 89 fa ad 2e 76 15 4a 19 4b 3f 36 b6
f7 95 2f 4c e6 95 9a 9f 95 61 c5 af 4b 02 ed 9f
e3 a2 a9 77 78 d3 93 d6 a3 73 35 dd 8a 21 e7 f3
f2 1d a8 c7 74 17 43 5a 10 38 86 b3 ef 1e f8 db
7c ac 79 cd ca 7f 6e ea 18 dd d4 4b 1e 83 87 fa
71 34 93 74 17 ad a1 03 4d 8d 8c cc e7 4c 0c a4
2b 8a a8 51 af cd 86 b3 01 0d a2 3b 77 e4 ce 33
ec c0 3d 29 1d 45 c2 84 94 74 66 da 8a a1 b0 f5
#
#Name=vimeo.com
#Expires=03/20/2020
#Modulus=
c7 08 83 5a 6d b9 8c 67 5e db bb 3f 14 cb 7b 93
ba 48 c7 bb 0d 8a 4b a5 df cc 02 8a 9b 80 93 65
b0 c7 e5 0f 04 3a 48 b9 ec ff 3e dc 8c 7a 90 cc
cf e5 22 79 30 3b a5 87 8b dc ee 1b 49 cb ea 91
57 67 1d fc 88 71 29 80 51 1d 6f 49 33 a2 34 b6
78 6c 33 7c ff ec 89 60 c6 ca 46 c1 b7 92 00 cc
74 bf 33 03 58 e0 42 eb 29 a1 22 52 16 36 c1 18
ad 30 3b af 5c cf 77 77 fe 01 3e 05 8a 42 6d be
5d 60 b0 78 44 c8 06 5d a8 88 2d 65 34 32 36 a8
3c 97 3b 3d e7 1f ca cf 67 fa fc 4e 76 36 ee 35
37 7b bf ba e3 f9 51 00 e9 79 49 21 e6 24 07 cc
89 11 dc 54 b7 dc e4 93 9c c9 8a ef 42 d4 77 b2
eb bc 67 82 21 de 57 d2 bb a4 bd b9 2f 49 b9 96
f2 54 72 02 d2 7d 77 c4 33 a2 db 0f 36 37 89 a2
e9 55 17 97 20 64 e7 00 b1 3e f6 53 db 95 22 2d
0a 1d b9 23 61 42 2b 44 e6 5b 8a 1a f6 f5 6c fb
#
#Name=w3.org
#Expires=06/02/2019
#Modulus=
b7 e1 f2 60 0c e4 cf 2d 23 ed 29 10 a0 b4 53 be
35 ed 8d 3b f9 8f b5 45 41 ce ee 51 ed d5 2b 42
fe ea 04 12 d5 f5 1a ad 71 b1 4a 8e 53 21 90 04
d0 b4 2c 9d 08 b3 0d 5a 6e 45 19 1b de 96 78 0d
22 5d 0a 5e 85 bd dc a0 fb 45 e8 45 04 9f 58 06
3e 82 f7 e7 f8 68 05 e2 69 43 87 ce 14 e6 d0 aa
0e 5f d5 51 c2 55 0e fd 95 5e 3f 44 22 7c f8 87
96 00 c6 7a cb f8 43 a8 fc 45 c7 69 53 c5 8a a9
86 78 27 d7 de f4 61 a3 e3 af 7e fa df 56 e2 a0
5b cd 53 a8 88 32 98 73 40 68 db ea 9a 76 33 ae
25 ae b5 7a c6 ab ce ae 3c a9 fc 73 df a4 74 52
90 a2 51 e0 95 eb 0f 59 cd ea 46 47 88 48 0a ea
1a 91 ee 8e 2c a4 ae 1c a1 4a 2a ce b4 ad 23 d9
d5 7a 36 3a 2b 9d 7c 34 64 9a c7 43 b0 1b 19 64
23 7c fa ee 38 57 6a c3 3b 61 b4 6c 28 ea 8e b4
41 ca 54 79 e3 e3 97 90 74 f7 49 8f 5d 06 45 2b
#
#Name=digg.com
#Expires=06/08/2019
#Modulus=
e1 21 04 3d f2 6d a8 f2 cc e9 b6 1a b7 22 71 c3
a3 19 4d 2f d0 b0 eb be 86 84 0e e1 fa 07 63 a1
77 81 50 fa b5 4f f1 ce 39 cd 48 33 8c 70 0b 7e
93 8f 00 24 1c 4c fe 94 45 cc 0d bc de 26 d3 74
5e 70 38 52 9e bc 67 57 89 b1 1b af 3d c0 38 cf
23 27 cc 69 36 83 03 dc ec 33 67 21 d4 19 e3 02
df a8 76 86 95 9d d5 f1 5a 04 0e 08 1e 68 f7 9d
c3 c4 ec ae 1d 50 3b 79 ab fd 3f 8d 2b 3c 70 c5
1e 6c 5e 0c b7 d6 fa a9 c1 2e 88 e7 53 31 32 6a
dc 98 c1 e4 25 0b 6d aa cb 56 33 9b 4a d7 38 39
d5 36 52 27 53 c0 8f f6 d2 97 6f d1 40 33 21 7a
5d f0 6b 69 ef c1 8e ae 8c 71 1b b8 46 bd ed c0
86 4e 74 31 96 93 7b fc d2 0a cc ba f2 31 78 f6
58 e7 0d fa ad b1 35 c0 d8 3d be 88 55 08 92 87
3b b5 88 61 37 df 2a b0 69 5b 3a cf a3 28 c6 3a
49 da 6d cf 08 1c 95 75 cb 99 3c 2d 91 19 d4 8b
#
#Name=dropbox.com
#Expires=11/02/2020
#Modulus=
bc 9e 8d 01 9d d4 c0 ad 20 40 eb d5 45 eb 66 84
2a 28 be 22 21 6f 2b 8a 14 74 4c 40 17 c5 be fd
33 c4 1c cf 4a e8 02 7a 63 f5 bf 6a 5e 72 55 ba
11 e3 b8 ef ef 82 17 67 ab 25 f3 45 7a 56 b7 1c
15 32 43 57 6f a0 16 d0 57 08 c7 c3 e3 13 6d 03
d7 2f 85 73 46 3e 1c 19 62 1f 5a 93 50 15 ce 83
4c 1e 50 90 95 0e 14 bf 1e 64 14 ec 02 0d 63 31
de 46 39 1c 53 d0 4e 17 8a 07 bb ca bd 4c 88 3f
4f 8b df e1 6b be f6 3a f5 8b bd bb 77 2a ee 16
81 86 71 d4 77 cf 0e 46 c9 ae fa 28 9a 97 44 3b
e3 49 ad 58 f5 bc 2f 2e 08 c3 8e 0e b6 a3 17 0d
f8 87 37 3c 8a 07 39 6c aa 42 91 b8 b2 60 93 d7
0b 3d fb d0 0b dc 56 6a 07 cf cf b1 9f b8 54 ba
73 90 93 0b 6b 33 d6 4c ec ad e3 fe af fd da b9
f4 0b 48 f5 c6 f4 c2 33 c2 46 6e d0 11 7d fe 9b
19 b9 1f dd 33 3f 8b 92 38 38 f9 d8 a4 66 39 3b
#
#Name=archive.org
#Expires=02/22/2020
#Modulus=
d2 ba 32 ad 20 0d 8a e5 05 34 ff 93 af 34 83 45
11 e3 95 87 2d 53 f2 14 d9 a4 af e1 ac 4a 86 e8
e0 fb 08 9c d0 66 2e c7 f9 e2 a2 9b a8 d3 dd d8
38 3e f1 67 89 6c 69 db fa 36 bb 2b 73 66 e8 5f
a2 df c0 62 bf 14 35 43 e3 3b 3c 28 d3 76 5d 09
93 a1 22 db 61 cf b6 8a 62 6f e0 43 23 4f 2e 28
b1 9f c5 31 9e bf b1 a4 b7 a3 b8 a5 e8 64 59 01
bf 18 db b2 37 b4 c3 42 58 65 79 73 85 16 ad 7c
6e c4 25 ab b0 7c 84 7d a8 be b3 68 8c d9 3a 67
ed b8 bc 71 1a 68 c8 05 fe 5c 53 72 ff d5 ba fc
e2 da 0b 08 66 22 34 c7 b9 76 6d c4 9b 98 5e 0e
ec 68 d6 8b e8 5f 54 0a ef 51 8f 89 a9 93 60 ac
ee 1a 45 06 a4 4e 19 c6 38 35 59 ef d1 4d 22 cd
01 fe 51 26 9e c7 1b fb e8 6d 46 86 01 dc b5 23
1a 63 5a 9c 1b 08 6a 36 4b b7 58 ea bc f8 b4 49
c6 51 f5 5e b0 ca 55 36 3c df 2a 16 24 4e e1 ab
#
#Name=mail.ru
#Expires=10/08/2020
#Modulus=
a5 46 ec 16 f0 19 9b ca 25 64 a5 97 8e 87 79 ee
ed c3 83 81 84 c2 a2 19 cb 3b e9 e8 f7 28 7e b8
8e 52 07 b8 dc 08 75 14 15 1f 69 c5 5c 77 91 10
ac e9 29 36 97 93 ca 05 5b 6c db e6 1d 78 b4 91
41 ce 10 89 c9 aa fa 1f fe e6 21 0e f4 03 ab 70
a9 94 64 c1 2b 29 5a c9 de 1c 1b f8 f4 95 20 99
59 79 97 f1 52 de 55 0b c0 c9 73 21 ad 28 5a 8a
96 ee 87 cf 09 2d d1 5a e5 57 20 74 3a fe a5 ac
38 d5 26 b3 61 9a 5f 32 44 d4 ef bd 68 a4 9d 1d
a4 a9 3f 05 af 23 8f ce 70 a5 72 58 36 3f 50 c5
8b 38 5a e0 e9 2c 37 c9 8a 8e 91 5e 77 c9 75 0c
43 bd 1d 70 33 70 41 e2 eb f4 e9 6f 9b bf 37 dc
18 af 43 04 93 02 d7 5c 97 b7 a5 7c 96 f1 5f 0c
d7 4c ed b5 b6 db 4f ff 4b 5a 21 cb 70 d5 cf 75
cb 06 c9 a3 31 eb 36 fe 79 17 a5 2d c8 0f 23 67
3d 9c 56 9d 82 b2 f3 bb b5 7e 2b 9a 7b c2 65 69
#
#Name=360.cn
#Expires=02/12/2019
#Modulus=
81 5d df fd 0f 33 bb ff 34 9a 64 d4 64 2f af 7d
77 e9 ac 27 a7 33 3d d1 cc 3e cb c0 ab fb ae 1c
c4 de 96 38 39 a6 ab ed 42 83 35 68 55 d4 91 fd
15 38 94 4a fb ee 84 f1 5f 21 28 a3 e3 62 79 d3
25 f9 88 34 4a aa c9 ff d5 fa 17 51 75 f4 57 56
8d a3 d9 66 db bc 68 02 e8 e4 a1 25 78 3f 37 b7
ed b0 8a ca b8 34 d7 b9 df 09 51 a2 9a 86 e6 b5
37 4c 9b 5b 4d 66 b5 0a 8c 17 b9 70 16 e5 82 c8
7e b9 11 1a 93 19 b5 ed d8 f3 e7 41 2e 3f 22 88
31 fa 30 b8 6b 9e 08 e1 0f ec af 70 02 d9 b6 0a
49 af 45 66 43 02 be 75 89 af db f1 5e 7f f0 7a
07 06 a7 0c 46 46 45 d4 cd 1f ae 16 f5 d3 26 08
5a d5 95 23 91 04 31 a9 d7 9a 4d 61 cb 23 b1 42
e5 25 b2 72 0f d8 54 86 6c 0f 98 e5 59 db fb 08
34 4e 82 1d 96 0e 77 36 f3 f5 91 c0 67 45 de 33
82 0d 4f 22 d3 e8 0e f6 1b 90 82 2f bc 23 ef fd
#
#Name=netflix.com
#Expires=25/07/2019
#Modulus=
a6 4b e4 63 b2 47 cb c5 3b 11 63 b7 6a af b9 1d
e0 90 c2 2c 56 2e 31 97 47 6a e4 f8 1e e5 a6 82
96 c0 8d 27 f4 db ab ae e0 2a 5a eb 11 6d b7 26
30 4c e9 6a f6 fe 0b 50 ab 1f f6 b2 2c ce c7 68
c3 bf 38 48 aa a6 5f 6d 99 a2 22 69 1b e3 ae 10
ea 0f 25 36 be a8 50 a2 ba 0f 10 28 67 8a 20 b3
11 76 fd 84 65 06 11 31 91 4a ba 8f d5 4f 42 22
2f c3 f7 46 a3 15 8e 4a 6d b9 30 4e d3 cc e0 35
f3 27 b8 4e 56 54 ba f4 62 3e 6f 13 fa 19 2d 0c
d1 f0 42 32 bb 76 76 92 40 1d b0 4b 9d 87 55 d9
83 86 c4 e3 60 54 ba fc 56 39 b5 46 ed 72 44 46
cd 54 1b d0 fb cc f2 75 44 f8 8d 21 be 5c 03 cb
88 d2 3b 59 78 3e 1b c3 7b c0 d7 23 be f1 bd d5
21 71 65 3d 8d 5b 51 88 21 fe 0f 2e b4 37 d8 18
5e 22 7b bb 7c fd 07 05 d2 b9 9c ad a7 90 2f 35
6a bc 85 17 fa b0 28 aa ec 54 3f 3d e5 81 00 97
@@ -1,187 +0,0 @@
from __future__ import print_function
from ConfigParser import SafeConfigParser
from SocketServer import ThreadingMixIn
from struct import pack
import os, binascii, itertools, re, random
import threading, BaseHTTPServer
import select, time, socket
from subprocess import check_output
#General utility objects used by both auditor and auditee.

config = SafeConfigParser()
config_location = os.path.join(os.path.dirname(os.path.realpath(__file__)),'tlsnotary.ini')

#required_options = {'Notary':['notary_server','notary_port']}
required_options = {}
reliable_sites = {}


def verify_signature(msg, signature, modulus):
    '''RSA verification is sig^e mod n; drop the padding and take the last 32 bytes.
    Args: msg as sha256 digest, signature as bytearray, modulus as (big) int
    '''
    sig = ba2int(signature)
    exponent = 65537
    result = pow(sig, exponent, modulus)
    padded_hash = bi2ba(result, fixed=512) #4096 bit key
    unpadded_hash = padded_hash[512-32:]
    if msg == unpadded_hash:
        return True
    else:
        return False
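#Illustrative check (assumed values): with a known 4096-bit notary modulus n,
#verifying a signature over some signed data reduces to:
#
#  digest = sha256(signed_data).digest()          #32 raw bytes
#  ok = verify_signature(digest, sig_bytes, n)    #True iff sig^65537 mod n ends in digest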


def load_program_config():

    loadedFiles = config.read([config_location])
    #detailed sanity checking:
    #did the file exist?
    if len(loadedFiles) != 1:
        raise Exception("Could not find config file: "+config_location)
    #check for sections
    for s in required_options:
        if s not in config.sections():
            raise Exception("Config file does not contain the required section: "+s)
    #then check for specific options
    for k,v in required_options.iteritems():
        for o in v:
            if o not in config.options(k):
                raise Exception("Config file does not contain the required option: "+o)


def import_reliable_sites(d):
    '''Read in the site names and ssl ports from the config file,
    and then read in the corresponding pubkeys in browser hex format from
    the file pubkeys.txt in directory d. Then combine this data into the reliable_sites global dict.'''
    sites = [x.strip() for x in config.get('SSL','reliable_sites').split(',')]
    ports = [int(x.strip()) for x in config.get('SSL','reliable_sites_ssl_ports').split(',')]
    assert len(sites) == len(ports), "Error, tlsnotary.ini file contains a mismatch between reliable sites and ports"
    #import hardcoded pubkeys
    with open(os.path.join(d,'pubkeys.txt'),'rb') as f: plines = f.readlines()
    raw_pubkeys = []
    pubkeys = []
    while len(plines):
        next_raw_pubkey = list(itertools.takewhile(lambda x: x.startswith('#') != True, plines))
        k = len(next_raw_pubkey)
        plines = plines[k+1:]
        if k > 0: raw_pubkeys.append(''.join(next_raw_pubkey))
    for rp in raw_pubkeys:
        pubkeys.append(re.sub(r'\s+','',rp))
    for i,site in enumerate(sites):
        reliable_sites[site] = [ports[i]]
        reliable_sites[site].append(pubkeys[i])


def check_complete_records(d):
    '''Given a response d from a server,
    we want to know if its contents represents
    a complete set of records, however many.'''
    l = ba2int(d[3:5])
    if len(d) < l+5: return False
    elif len(d) == l+5: return True
    else: return check_complete_records(d[l+5:])
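#For context (standard TLS record framing, not specific to this project): each
#record starts with a 5-byte header - type (1 byte), version (2 bytes), length
#(2 bytes big-endian at offset 3:5) - so a buffer holding exactly one 100-byte
#record is 105 bytes long, and the recursion above consumes the buffer record
#by record.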


def create_sock(server, prt):
    returned_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    returned_sock.settimeout(int(config.get("General","tcp_socket_timeout")))
    returned_sock.connect((server, prt))
    return returned_sock


def recv_socket(sckt, is_handshake=False):
    last_time_data_was_seen_from_server = 0
    data_from_server_seen = False
    databuffer = ''
    while True:
        rlist, wlist, xlist = select.select((sckt,), (), (sckt,), 1)
        if len(rlist) == len(xlist) == 0: #timeout
            #TODO don't rely on a fixed timeout
            delta = int(time.time()) - last_time_data_was_seen_from_server
            if not data_from_server_seen: continue
            if delta < int(config.get("General","server_response_timeout")): continue
            return databuffer #we timed out on the socket read
        if len(xlist) > 0:
            print ('Socket exceptional condition. Terminating connection')
            return ''
        if len(rlist) == 0:
            print ('Python internal socket error: rlist should not be empty. Please investigate. Terminating connection')
            return ''
        for rsckt in rlist:
            data = rsckt.recv(1024*32)
            if not data:
                if not databuffer:
                    raise Exception ("Server closed the socket and sent no data")
                else:
                    return databuffer
            data_from_server_seen = True
            databuffer += data
            if is_handshake:
                if check_complete_records(databuffer): return databuffer #else, just continue loop
            last_time_data_was_seen_from_server = int(time.time())


def bi2ba(bigint, fixed=None):
    m_bytes = []
    while bigint != 0:
        b = bigint % 256
        m_bytes.insert(0, b)
        bigint //= 256
    if fixed:
        padding = fixed - len(m_bytes)
        if padding > 0: m_bytes = [0]*padding + m_bytes
    return bytearray(m_bytes)
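#Quick examples of the int<->bytearray helpers (illustrative):
#  bi2ba(65537)                 -> bytearray([1, 0, 1])
#  bi2ba(65537, fixed=4)        -> bytearray([0, 1, 0, 1])
#  ba2int(bytearray([1, 0, 1])) -> 65537   (ba2int is defined below)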


def xor(a, b):
    return bytearray([ord(a) ^ ord(b) for a,b in zip(a,b)])


def bigint_to_list(bigint):
    m_bytes = []
    while bigint != 0:
        b = bigint % 256
        m_bytes.insert(0, b)
        bigint //= 256
    return m_bytes


#convert bytearray into int
def ba2int(byte_array):
    return int(str(byte_array).encode('hex'), 16)


def gunzip_http(http_data):
    import gzip
    import StringIO
    http_header = http_data[:http_data.find('\r\n\r\n')+len('\r\n\r\n')]
    #\s* below means any amount of whitespace
    if re.search(r'content-encoding:\s*deflate', http_header, re.IGNORECASE):
        #TODO manually resend the request with compression disabled
        raise Exception('Please set gzip_disabled = 1 in tlsnotary.ini and rerun the audit')
    if not re.search(r'content-encoding:\s*gzip', http_header, re.IGNORECASE):
        return http_data #nothing to gunzip
    http_body = http_data[len(http_header):]
    ungzipped = http_header
    gzipped = StringIO.StringIO(http_body)
    f = gzip.GzipFile(fileobj=gzipped, mode="rb")
    ungzipped += f.read()
    return ungzipped


def dechunk_http(http_data):
    '''Dechunk only if http_data is chunked, otherwise return http_data unmodified'''
    http_header = http_data[:http_data.find('\r\n\r\n')+len('\r\n\r\n')]
    #\s* below means any amount of whitespace
    if not re.search(r'transfer-encoding:\s*chunked', http_header, re.IGNORECASE):
        return http_data #nothing to dechunk
    http_body = http_data[len(http_header):]

    dechunked = http_header
    cur_offset = 0
    chunk_len = -1 #initialize with a non-zero value
    while True:
        new_offset = http_body[cur_offset:].find('\r\n')
        if new_offset == -1: #precaution against endless looping
            #pinterest.com is known to not send the last 0 chunk when HTTP gzip is disabled
            return dechunked
        chunk_len_hex = http_body[cur_offset:cur_offset+new_offset]
        chunk_len = int(chunk_len_hex, 16)
        if chunk_len == 0: break #for properly-formed html we should break here
        cur_offset += new_offset + len('\r\n')
        dechunked += http_body[cur_offset:cur_offset+chunk_len]
        cur_offset += chunk_len + len('\r\n')
    return dechunked
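#Worked example of dechunking (standard HTTP/1.1 chunked framing, illustrative):
#a body of "4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n" is reassembled by the loop
#above into "Wikipedia"; each chunk is a hex length line followed by that many
#bytes and a trailing \r\n, terminated by a zero-length chunk.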
File diff suppressed because it is too large
@@ -1,22 +0,0 @@
[Notary]
server_name = 54.158.251.14
server_port = 10011

[SSL]
#NOTE: Do *not* modify this list of websites unless
#your auditor tells you to. They are meant
#to be highly trusted websites that both sides can use
#to test out the secret negotiation process. If you
#modify this list, your initial connection to the auditor
#may fail.
reliable_sites = whatsapp.com,wordpress.org,vimeo.com,w3.org,digg.com,dropbox.com,archive.org,mail.ru,360.cn,netflix.com
reliable_sites_ssl_ports = 443,443,443,443,443,443,443,443,443,443

[General]
msg_chunk_size = 110
tcp_socket_timeout = 20
server_response_timeout = 3
gzip_disabled = 0
prevent_render = 0
decrypt_with_slowaes = 1
use_paillier_scheme = 0
tls_11 = 1