diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
new file mode 100644
index 000000000..f904babb5
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,42 @@
+---
+name: Bug report
+about: Create a reproducible bug report.
+labels: bug, triage
+---
+
+## Summary
+
+What happened, and what did you expect to happen?
+
+## Description
+
+- versions affected:
+- python version:
+- config (optional: HW, OS):
+- workaround (optional): if you have a way to work around the issue
+- proposed fix (optional): if you have a way to fix the issue
+
+Step-by-step procedure someone should follow to trigger the bug:
+
+Minimal proof of concept (POC) to trigger the bug:
+
+
+```python
+print("Minimal POC to reproduce the bug")
+```
+
+
+
+
+## Artifacts
+
+Attach all generated artifacts here (generated in the `.artifacts` directory by default; see the documentation for more detailed instructions).
+
+Logs or output:
+
+
+```console
+```
+
+
+
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 000000000..0086358db
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1 @@
+blank_issues_enabled: true
diff --git a/.github/ISSUE_TEMPLATE/features.md b/.github/ISSUE_TEMPLATE/features.md
new file mode 100644
index 000000000..85f57d9d2
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/features.md
@@ -0,0 +1,17 @@
+---
+name: Feature
+about: Suggest a new feature.
+labels: feature
+---
+
+## Summary
+
+Concise summary of the feature to be added, with a use-case example if applicable.
+
+## Problem to solve
+
+What are you trying to achieve that is not possible with the current feature set?
+
+## Proposals
+
+What are your proposals to implement it?
diff --git a/.github/ISSUE_TEMPLATE/new_operator.md b/.github/ISSUE_TEMPLATE/new_operator.md
new file mode 100644
index 000000000..d72c6aab2
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/new_operator.md
@@ -0,0 +1,24 @@
+---
+name: New Operator
+about: Organise the support of a new operator or notion in the framework.
+labels: feature
+title: Support of **please-fill** in Concrete Numpy
+---
+
+## Umbrella
+
+This issue is about the support of **please-fill**, which is a new operator or notion that we plan to support in the framework.
+
+This is subdivided into different subtasks (please click on the right of the list items to create subtasks):
+- [ ] Tracing of **please-fill**
+- [ ] Evaluating **please-fill**
+- [ ] Lowering **please-fill** and compilation
+- [ ] Correctness of execution of **please-fill**
+
+Also, some more work is needed on our side (please click on the right of the list items to create subtasks):
+- [ ] Benchmark of **please-fill**
+- [ ] Tutorial of **please-fill** (if needed)
+
+In parallel, the issue for the compiler team (please click on the right of the list items to create subtasks):
+- [ ] Feature request from ConcreteML: **please-fill**
+
diff --git a/.github/ISSUE_TEMPLATE/refactor.md b/.github/ISSUE_TEMPLATE/refactor.md
new file mode 100644
index 000000000..0084792d9
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/refactor.md
@@ -0,0 +1,19 @@
+---
+name: Refactor
+about: Suggest a code refactor.
+labels: refactor
+---
+
+## Proposal
+
+What would be your refactor proposal?
+
+## Impact
+
+List all files/modules/projects impacted by this refactor.
+
+Files impacted:
+
+- file.py
+
+
diff --git a/.github/ISSUE_TEMPLATE/release.md b/.github/ISSUE_TEMPLATE/release.md
new file mode 100644
index 000000000..ebadba434
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/release.md
@@ -0,0 +1,37 @@
+---
+name: Release
+about: Issue template to prepare a release step by step.
+title: "Release vX.Y.Z (or vX.Y.Z-rc?)"
+---
+
+Please check each step once it is done (or was already done); by the end of the release, all check boxes must be checked.
+
+Release check-list:
+
+If it was not already done:
+- [ ] Choose the version number, e.g. `vX.Y.Z` (can be `vX.Y.Z-rc?` for Release Candidates) following semantic versioning: https://semver.org/
+- [ ] Update the project version to `X.Y.Z` (or `X.Y.Z-rc?`) by running:
+
+```bash
+VERSION=X.Y.Z make set_version
+# or
+VERSION=X.Y.Z-rc? make set_version
+```
+
+Then:
+- [ ] For non-RC releases: check the release milestone issues, cut out what can't be completed in time, and change the milestones for those issues
+- [ ] Checkout the commit for release
+- [ ] Call `make release`, which creates a signed tag (requires GPG keys setup, see [here](https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification)) and pushes it
+- [ ] Wait for the release workflow to finish and check that everything went well.
+
+To continue the release cycle:
+- [ ] Choose the version number for next release, e.g. `vA.B.C` (can be `vA.B.C-rc?` for Release Candidates) following semantic versioning: https://semver.org/
+- [ ] Update the project version to `A.B.C` (or `A.B.C-rc?`) by running:
+
+```bash
+VERSION=A.B.C make set_version
+# or
+VERSION=A.B.C-rc? make set_version
+```
+
+All done!
diff --git a/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md b/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md
new file mode 100644
index 000000000..54b1a07b5
--- /dev/null
+++ b/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md
@@ -0,0 +1,12 @@
+---
+name: Pull Request
+about: Create a Pull Request.
+---
+
+
diff --git a/.github/dependabot.yaml b/.github/dependabot.yaml
new file mode 100644
index 000000000..c2bba0ea8
--- /dev/null
+++ b/.github/dependabot.yaml
@@ -0,0 +1,23 @@
+version: 2
+registries:
+ ghcr-io: # Define access for a private registry
+ type: docker-registry
+ url: ghcr.io
+ username: ${{secrets.BOT_USERNAME}}
+ password: ${{secrets.BOT_TOKEN}}
+
+updates:
+ - package-ecosystem: "github-actions"
+ directory: "/"
+ schedule:
+      # Check for updates to GitHub Actions every Sunday
+ interval: "weekly"
+ day: "sunday"
+
+ - package-ecosystem: "docker"
+ directory: "/docker"
+ registries:
+ - ghcr-io # Allow version updates for dependencies in this registry
+ schedule:
+ interval: "weekly"
+ day: "sunday"
diff --git a/.github/workflows/continuous-integration.yaml b/.github/workflows/continuous-integration.yaml
new file mode 100644
index 000000000..d11593b52
--- /dev/null
+++ b/.github/workflows/continuous-integration.yaml
@@ -0,0 +1,749 @@
+name: concrete-numpy CI Pipeline
+on:
+ pull_request:
+ push:
+ branches:
+ - main
+ tags:
+ - "v*"
+
+ schedule:
+ # * is a special character in YAML so you have to quote this string
+ # At 22:00 on Sunday
+ # Timezone is UTC, so Paris time is +2 during the summer and +1 during winter
+ - cron: '0 22 * * 0'
+
+concurrency:
+ group: ${{ github.ref }}
+ cancel-in-progress: true
+
+env:
+ ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
+ IS_PR: ${{ github.event_name == 'pull_request' }}
+ IS_WEEKLY: ${{ github.event_name == 'schedule' }}
+ IS_RELEASE: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/') }}
+ IS_PUSH_TO_MAIN: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
+ AGENT_TOOLSDIRECTORY: /opt/hostedtoolcache
+ RUNNER_TOOL_CACHE: /opt/hostedtoolcache
+
+jobs:
+ matrix-preparation:
+ runs-on: ubuntu-22.04
+ outputs:
+ linux-matrix: ${{ steps.set-matrix.outputs.linux-matrix }}
+ macos-matrix: ${{ steps.set-matrix.outputs.macos-matrix }}
+ needs-37-linux-runner: ${{ steps.set-matrix.outputs.needs-37-linux-runner }}
+ needs-38-linux-runner: ${{ steps.set-matrix.outputs.needs-38-linux-runner }}
+ needs-39-linux-runner: ${{ steps.set-matrix.outputs.needs-39-linux-runner }}
+ needs-310-linux-runner: ${{ steps.set-matrix.outputs.needs-310-linux-runner }}
+ steps:
+ - name: Checkout code
+ uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+
+ - name: Set matrix
+ id: set-matrix
+ run: |
+ BUILD_TYPE=
+ if [[ "${IS_PR}" == "true" ]]; then
+ BUILD_TYPE="pr"
+ elif [[ "${IS_WEEKLY}" == "true" ]]; then
+ BUILD_TYPE="weekly"
+ elif [[ "${IS_RELEASE}" == "true" ]]; then
+ BUILD_TYPE="release"
+ elif [[ "${IS_PUSH_TO_MAIN}" == "true" ]]; then
+ BUILD_TYPE="push_to_main"
+ else
+ echo "Unknown BUILD_TYPE! Aborting"
+ exit 1
+ fi
+ MATRIX_JSON=$(mktemp --suffix=.json)
+ echo "Prepared build matrix:"
+ python3 ./script/actions_utils/generate_test_matrix.py \
+ --output-json "${MATRIX_JSON}" \
+ --build-type "${BUILD_TYPE}"
+ LINUX_MATRIX=$(jq -rc '. | map(select(.os_kind=="linux"))' "${MATRIX_JSON}")
+ MACOS_MATRIX=$(jq -rc '. | map(select(.os_kind=="macos"))' "${MATRIX_JSON}")
+
+ echo "Linux Matrix:"
+ echo "${LINUX_MATRIX}" | jq '.'
+
+ echo "macOS Matrix:"
+ echo "${MACOS_MATRIX}" | jq '.'
+
+ echo "::set-output name=linux-matrix::${LINUX_MATRIX}"
+ echo "::set-output name=macos-matrix::${MACOS_MATRIX}"
+
+ NEEDS_LINUX_37_RUNNER=$(echo "${LINUX_MATRIX}" | \
+ jq -rc '. | map(select(.os_kind=="linux" and .python_version=="3.7")) | length > 0')
+ NEEDS_LINUX_38_RUNNER=$(echo "${LINUX_MATRIX}" | \
+ jq -rc '. | map(select(.os_kind=="linux" and .python_version=="3.8")) | length > 0')
+ NEEDS_LINUX_39_RUNNER=$(echo "${LINUX_MATRIX}" | \
+ jq -rc '. | map(select(.os_kind=="linux" and .python_version=="3.9")) | length > 0')
+ NEEDS_LINUX_310_RUNNER=$(echo "${LINUX_MATRIX}" | \
+ jq -rc '. | map(select(.os_kind=="linux" and .python_version=="3.10")) | length > 0')
+
+ echo "Needs Linux 3.7 runner:"
+ echo "${NEEDS_LINUX_37_RUNNER}"
+
+ echo "Needs Linux 3.8 runner:"
+ echo "${NEEDS_LINUX_38_RUNNER}"
+
+ echo "Needs Linux 3.9 runner:"
+ echo "${NEEDS_LINUX_39_RUNNER}"
+
+ echo "Needs Linux 3.10 runner:"
+ echo "${NEEDS_LINUX_310_RUNNER}"
+
+ echo "::set-output name=needs-37-linux-runner::${NEEDS_LINUX_37_RUNNER}"
+ echo "::set-output name=needs-38-linux-runner::${NEEDS_LINUX_38_RUNNER}"
+ echo "::set-output name=needs-39-linux-runner::${NEEDS_LINUX_39_RUNNER}"
+ echo "::set-output name=needs-310-linux-runner::${NEEDS_LINUX_310_RUNNER}"
+
+ start-runner-linux:
+ needs: [matrix-preparation]
+ name: Start EC2 runner
+ runs-on: ubuntu-22.04
+ outputs:
+ label-37: ${{ steps.start-ec2-runner-37.outputs.label }}
+ ec2-instance-id-37: ${{ steps.start-ec2-runner-37.outputs.ec2-instance-id || '' }}
+ label-38: ${{ steps.start-ec2-runner-38.outputs.label }}
+ ec2-instance-id-38: ${{ steps.start-ec2-runner-38.outputs.ec2-instance-id || '' }}
+ label-39: ${{ steps.start-ec2-runner-39.outputs.label }}
+ ec2-instance-id-39: ${{ steps.start-ec2-runner-39.outputs.ec2-instance-id || '' }}
+ label-310: ${{ steps.start-ec2-runner-310.outputs.label }}
+ ec2-instance-id-310: ${{ steps.start-ec2-runner-310.outputs.ec2-instance-id || '' }}
+ matrix: ${{ steps.update-linux-matrix.outputs.linux-matrix }}
+ steps:
+ - name: Checkout Code
+ uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+
+ - name: Configure AWS credentials
+ uses: aws-actions/configure-aws-credentials@67fbcbb121271f7775d2e7715933280b06314838
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: ${{ secrets.AWS_REGION }}
+
+ - name: Start EC2 runner python 37
+ id: start-ec2-runner-37
+ if: ${{ !cancelled() && fromJSON(needs.matrix-preparation.outputs.needs-37-linux-runner) }}
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ with:
+ mode: start
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ ec2-image-id: ${{ secrets.AWS_EC2_AMI }}
+ ec2-instance-type: ${{ secrets.AWS_EC2_INSTANCE_TYPE }}
+ subnet-id: ${{ secrets.AWS_EC2_SUBNET_ID }}
+ security-group-id: ${{ secrets.AWS_EC2_SECURITY_GROUP_ID }}
+
+ - name: Start EC2 runner python 38
+ id: start-ec2-runner-38
+ if: ${{ !cancelled() && fromJSON(needs.matrix-preparation.outputs.needs-38-linux-runner) }}
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ with:
+ mode: start
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ ec2-image-id: ${{ secrets.AWS_EC2_AMI }}
+ ec2-instance-type: ${{ secrets.AWS_EC2_INSTANCE_TYPE }}
+ subnet-id: ${{ secrets.AWS_EC2_SUBNET_ID }}
+ security-group-id: ${{ secrets.AWS_EC2_SECURITY_GROUP_ID }}
+
+ - name: Start EC2 runner python 39
+ id: start-ec2-runner-39
+ if: ${{ !cancelled() && fromJSON(needs.matrix-preparation.outputs.needs-39-linux-runner) }}
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ with:
+ mode: start
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ ec2-image-id: ${{ secrets.AWS_EC2_AMI }}
+ ec2-instance-type: ${{ secrets.AWS_EC2_INSTANCE_TYPE }}
+ subnet-id: ${{ secrets.AWS_EC2_SUBNET_ID }}
+ security-group-id: ${{ secrets.AWS_EC2_SECURITY_GROUP_ID }}
+
+ - name: Start EC2 runner python 310
+ id: start-ec2-runner-310
+ if: ${{ !cancelled() && fromJSON(needs.matrix-preparation.outputs.needs-310-linux-runner) }}
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ with:
+ mode: start
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ ec2-image-id: ${{ secrets.AWS_EC2_AMI }}
+ ec2-instance-type: ${{ secrets.AWS_EC2_INSTANCE_TYPE }}
+ subnet-id: ${{ secrets.AWS_EC2_SUBNET_ID }}
+ security-group-id: ${{ secrets.AWS_EC2_SECURITY_GROUP_ID }}
+
+ - name: Update Linux runs_on Matrix
+ id: update-linux-matrix
+ env:
+ MATRIX: ${{ needs.matrix-preparation.outputs.linux-matrix }}
+ run: |
+ MATRIX=$(echo "${MATRIX}" | jq -rc \
+ '(. | map(select(.os_kind=="linux" and .python_version=="3.7") |= . + {"runs_on": "${{ steps.start-ec2-runner-37.outputs.label }}"}) )')
+ MATRIX=$(echo "${MATRIX}" | jq -rc \
+ '(. | map(select(.os_kind=="linux" and .python_version=="3.8") |= . + {"runs_on": "${{ steps.start-ec2-runner-38.outputs.label }}"}) )')
+ MATRIX=$(echo "${MATRIX}" | jq -rc \
+ '(. | map(select(.os_kind=="linux" and .python_version=="3.9") |= . + {"runs_on": "${{ steps.start-ec2-runner-39.outputs.label }}"}) )')
+ MATRIX=$(echo "${MATRIX}" | jq -rc \
+ '(. | map(select(.os_kind=="linux" and .python_version=="3.10") |= . + {"runs_on": "${{ steps.start-ec2-runner-310.outputs.label }}"}) )')
+
+ echo "Updated matrix:"
+ echo "${MATRIX}"
+
+ echo "::set-output name=linux-matrix::${MATRIX}"
+
+ build-linux:
+ needs: [start-runner-linux]
+
+ runs-on: ${{ matrix.runs_on }}
+ # Run in a clean container
+ container:
+ image: ubuntu:22.04
+ defaults:
+ run:
+ shell: bash
+ strategy:
+ fail-fast: false
+ matrix: ${{ fromJSON(format('{{"include":{0}}}', needs.start-runner-linux.outputs.matrix)) }}
+ env:
+ IS_REF_BUILD: ${{ matrix.python_version == '3.8' }}
+
+ steps:
+ - name: Docker container related setup and git installation
+ run: |
+ TZ=Europe/Paris
+ echo "TZ=${TZ}" >> "$GITHUB_ENV"
+ ln -snf /usr/share/zoneinfo/${TZ} /etc/localtime && echo ${TZ} > /etc/timezone
+ sed -i 's|^deb http://archive|deb http://fr.archive|g' /etc/apt/sources.list
+ apt update && apt install git git-lfs -y
+
+ - name: Checkout Code
+ uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+      # fetch-depth: 0 to fetch all commits for changelog generation
+ with:
+ fetch-depth: 0
+
+ - name: Set up Python ${{ matrix.python_version }}
+ uses: actions/setup-python@5ccb29d8773c3f3f653e1705f474dfaa8a06a912
+ with:
+ python-version: ${{ matrix.python_version }}
+
+ - name: Install dependencies
+ id: install-deps
+ run: |
+ ./script/make_utils/setup_os_deps.sh
+ make setup_env
+
+ - name: Check commits first line format
+ id: ccfl
+ if: ${{ fromJSON(env.IS_PR) && steps.install-deps.outcome == 'success' && !cancelled() }}
+ uses: gsactions/commit-message-checker@16fa2d5de096ae0d35626443bcd24f1e756cafee
+ with:
+ pattern: '^((feat|fix|chore|refactor|style|test|docs)(\((bounds|helpers|data_types|debugging|extensions|fhe_circuit|mlir|graph|optimization|representation|tracing|values|benchmarks|ci|scripts|compilation|execution|deps)\))?\:) .+$'
+ flags: 'gs'
+ error: "Your first line has to contain a commit type and scope like \"feat(my_feature): msg\".\
+ Pattern: '^((feat|fix|chore|refactor|style|test|docs)(\\((bounds|helpers|data_types|debugging|extensions|fhe_circuit|mlir|graph|optimization|representation|tracing|values|benchmarks|ci|scripts|compilation|execution|deps)\\))?\\:)'"
+ excludeDescription: 'true' # optional: this excludes the description body of a pull request
+ excludeTitle: 'true' # optional: this excludes the title of a pull request
+ checkAllCommitMessages: 'true' # optional: this checks all commits associated with a pull request
+ accessToken: ${{ secrets.GITHUB_TOKEN }} # github access token is only required if checkAllCommitMessages is true
+
+ - name: Check commits line length
+ id: ccll
+ if: ${{ fromJSON(env.IS_PR) && steps.install-deps.outcome == 'success' && !cancelled() }}
+ uses: gsactions/commit-message-checker@16fa2d5de096ae0d35626443bcd24f1e756cafee
+ with:
+ pattern: '(^.{0,74}$\r?\n?){0,20}'
+ flags: 'gm'
+ error: 'The maximum line length of 74 characters is exceeded.'
+ excludeDescription: 'true' # optional: this excludes the description body of a pull request
+ excludeTitle: 'true' # optional: this excludes the title of a pull request
+ checkAllCommitMessages: 'true' # optional: this checks all commits associated with a pull request
+ accessToken: ${{ secrets.GITHUB_TOKEN }} # github access token is only required if checkAllCommitMessages is true
+
+ - name: Commit conformance
+ id: commit-conformance
+ if: ${{ steps.install-deps.outcome == 'success' && !cancelled() }}
+ env:
+ CCFL_OK: ${{ (fromJSON(env.IS_PR) && steps.ccfl.outcome == 'success') || steps.ccfl.outcome == 'skipped' }}
+ CCLL_OK: ${{ (fromJSON(env.IS_PR) && steps.ccll.outcome == 'success') || steps.ccll.outcome == 'skipped' }}
+ run: |
+ if [[ "${CCFL_OK}" != "true" || "${CCLL_OK}" != "true" ]]; then
+ echo "Issues with commits. First line ok: ${CCFL_OK}. Line length ok: ${CCLL_OK}."
+ exit 1
+ fi
+
+ - name: Source code Conformance
+ id: cs
+ if: ${{ steps.install-deps.outcome == 'success' && !cancelled() }}
+ # pcc launches an internal target with proper flags
+ run: |
+ make pcc
+
+ - name: Generate release changelog
+ id: changelog
+ if: ${{ fromJSON(env.IS_RELEASE) && steps.install-deps.outcome == 'success' && !cancelled() }}
+ run: |
+ GIT_TAG=$(echo "${{ github.ref }}" | sed 's/refs\/tags\///g')
+ CHANGELOG_FILE="CHANGELOG_${GIT_TAG}.md"
+ echo "::set-output name=changelog-file::${CHANGELOG_FILE}"
+ poetry run python ./script/make_utils/changelog_helper.py \
+ --to-ref "${GIT_TAG}" \
+ --to-ref-must-have-tag \
+ --ancestor-must-have-tag > "${CHANGELOG_FILE}"
+
+ - name: Conformance status
+ id: conformance
+ if: ${{ always() && !cancelled() }}
+ env:
+ CONFORMANCE_STATUS: ${{ steps.commit-conformance.outcome == 'success' && steps.cs.outcome == 'success' }}
+ run: |
+ if [[ "${CONFORMANCE_STATUS}" != "true" ]]; then
+ echo "Conformance failed, check logs"
+ exit 1
+ fi
+
+ - name: Upload changelog artifacts
+ if: ${{ fromJSON(env.IS_REF_BUILD) && steps.changelog.outcome == 'success' && !cancelled() }}
+ uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
+ with:
+ name: changelog
+ path: ${{ steps.changelog.outputs.changelog-file }}
+
+      # Create packages before running tests, so they can be retrieved if an unexpected test failure happens.
+      # Build the package only once: as we have no binary dependencies, the same wheel can be used on Linux
+      # and macOS as long as the dependencies are available.
+ - name: Build wheel
+ id: build-wheel
+ if: ${{ fromJSON(env.IS_REF_BUILD) && steps.conformance.outcome == 'success' && !cancelled() }}
+ run: |
+ rm -rf dist
+ poetry build -f wheel
+
+ - name: Upload wheel artifact
+ if: ${{ fromJSON(env.IS_REF_BUILD) && steps.build-wheel.outcome == 'success' }}
+ uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
+ with:
+ name: py3-wheel
+ path: dist/*.whl
+
+ - name: PyTest Source Code
+ id: pytest
+ if: ${{ steps.conformance.outcome == 'success' && !cancelled() }}
+ run: |
+ make pytest
+
+# - name: PyTest CodeBlocks
+# if: ${{ steps.conformance.outcome == 'success' && !cancelled() }}
+# run: |
+# make pytest_codeblocks
+#
+# - name: PyTest Notebooks
+# if: ${{ fromJSON(env.IS_WEEKLY) && steps.conformance.outcome == 'success' && !cancelled() }}
+# run: |
+# make pytest_nb
+
+ # Compute coverage only on reference build
+ - name: Test coverage
+ id: coverage
+ if: ${{ always() && fromJSON(env.IS_REF_BUILD) && steps.pytest.outcome != 'skipped' && !cancelled() }}
+ run: |
+ ./script/actions_utils/coverage.sh .global-coverage.json
+
+ - name: Comment with coverage
+ uses: marocchino/sticky-pull-request-comment@3d60a5b2dae89d44e0c6ddc69dd7536aec2071cd
+ if: ${{ steps.coverage.outcome != 'skipped' && !cancelled() }}
+ continue-on-error: true
+ with:
+ path: diff-coverage.txt
+ recreate: true
+
+  # This manages build matrices and provides a single status point for PRs.
+  # It could be updated to take macOS into account, but that is impractical for private repos
+  # because of long builds and therefore expensive macOS testing.
+ linux-build-status:
+ name: Linux build status
+ needs: [build-linux]
+ runs-on: ubuntu-22.04
+ if: ${{ always() }}
+ steps:
+ - name: Fail on unsuccessful Linux build
+ shell: bash
+ run: |
+ if [[ ${{ needs.build-linux.result }} != "success" ]]; then
+ exit 1
+ fi
+
+ - name: Slack Notification
+ if: ${{ always() && !success() }}
+ continue-on-error: true
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
+ SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
+ SLACK_COLOR: ${{ needs.build-linux.result }}
+ SLACK_MESSAGE: "Build finished with status ${{ needs.build-linux.result }}. (${{ env.ACTION_RUN_URL }})"
+ SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
+
+ stop-runner-linux:
+ name: Stop EC2 runner
+ needs: [build-linux, start-runner-linux]
+ runs-on: ubuntu-22.04
+ if: ${{ always() && (needs.start-runner-linux.result != 'skipped') }}
+ steps:
+ - name: Configure AWS credentials
+ uses: aws-actions/configure-aws-credentials@67fbcbb121271f7775d2e7715933280b06314838
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: ${{ secrets.AWS_REGION }}
+
+ - name: Stop EC2 runner python 37
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ if: ${{ always() && needs.start-runner-linux.outputs.ec2-instance-id-37 }}
+ with:
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ label: ${{ needs.start-runner-linux.outputs.label-37 }}
+ ec2-instance-id: ${{ needs.start-runner-linux.outputs.ec2-instance-id-37 }}
+ mode: stop
+
+ - name: Stop EC2 runner python 38
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ if: ${{ always() && needs.start-runner-linux.outputs.ec2-instance-id-38 }}
+ with:
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ label: ${{ needs.start-runner-linux.outputs.label-38 }}
+ ec2-instance-id: ${{ needs.start-runner-linux.outputs.ec2-instance-id-38 }}
+ mode: stop
+
+ - name: Stop EC2 runner python 39
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ if: ${{ always() && needs.start-runner-linux.outputs.ec2-instance-id-39 }}
+ with:
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ label: ${{ needs.start-runner-linux.outputs.label-39 }}
+ ec2-instance-id: ${{ needs.start-runner-linux.outputs.ec2-instance-id-39 }}
+ mode: stop
+
+ - name: Stop EC2 runner python 310
+ uses: machulav/ec2-github-runner@4e0303de215db88e1c489e07a15ca4d867f488ea
+ if: ${{ always() && needs.start-runner-linux.outputs.ec2-instance-id-310 }}
+ with:
+ github-token: ${{ secrets.EC2_RUNNER_BOT_TOKEN }}
+ label: ${{ needs.start-runner-linux.outputs.label-310 }}
+ ec2-instance-id: ${{ needs.start-runner-linux.outputs.ec2-instance-id-310 }}
+ mode: stop
+
+ build-macos:
+ needs: [matrix-preparation]
+ if: ${{ needs.matrix-preparation.outputs.macos-matrix != '[]' }}
+
+ runs-on: ${{ matrix.runs_on }}
+ defaults:
+ run:
+ shell: bash
+ strategy:
+ fail-fast: false
+ matrix: ${{ fromJSON(format('{{"include":{0}}}', needs.matrix-preparation.outputs.macos-matrix)) }}
+ steps:
+ - name: Checkout Code
+ uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+
+ - name: Set up Python ${{ matrix.python_version }}
+ uses: actions/setup-python@5ccb29d8773c3f3f653e1705f474dfaa8a06a912
+ with:
+ python-version: ${{ matrix.python_version }}
+
+ - name: Install dependencies
+ id: install-deps
+ run: |
+ ./script/make_utils/setup_os_deps.sh
+
+ PATH="/usr/local/opt/make/libexec/gnubin:$PATH"
+ echo "PATH=${PATH}" >> "$GITHUB_ENV"
+
+ make setup_env
+
+ - name: PyTest Source Code
+ run: |
+ make pytest
+
+ weekly-pip-audit:
+ if: ${{ github.event_name == 'schedule' }}
+ runs-on: ubuntu-22.04
+ steps:
+ - name: Checkout Code
+ uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+
+ - name: Set up Python 3.8
+ uses: actions/setup-python@5ccb29d8773c3f3f653e1705f474dfaa8a06a912
+ with:
+ python-version: '3.8'
+
+ - name: Set up env
+ run: |
+ python -m pip install --upgrade pip
+ python -m pip install poetry
+ sudo apt update && sudo apt install sqlite3 -y
+ make setup_env
+
+ - name: Run pip-audit
+ shell: bash
+ run: |
+ VULN_OUT="$(mktemp --suffix=.json)"
+ REPORT_OUT="$(mktemp --suffix=.txt)"
+ echo "REPORT_OUT=${REPORT_OUT}" >> "$GITHUB_ENV"
+ poetry run pip-audit -f json > "${VULN_OUT}"
+ cat "${VULN_OUT}"
+ poetry run python ./script/actions_utils/parse_pip_audit_vulns.py \
+ --vulns-json "${VULN_OUT}" \
+ --vulns-report "${REPORT_OUT}"
+
+      # Load the report in a separate step so it still runs when the step above exited with an error code to fail the workflow
+ - name: Load report in env
+ if: ${{ always() }}
+ run: |
+ cat "${REPORT_OUT}"
+ REPORT="$(cat "${REPORT_OUT}")"
+ echo "REPORT=${REPORT}" >> "$GITHUB_ENV"
+
+ - name: Slack Notification
+ if: ${{ always() && !success() }}
+ continue-on-error: true
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
+ SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
+ SLACK_COLOR: ${{ job.status }}
+ SLACK_MESSAGE: "${{ env.REPORT || 'Error during pip-audit' }} (${{ env.ACTION_RUN_URL }})"
+ SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
+
+ package-release:
+ needs: [build-linux]
+ if: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/') }}
+
+ outputs:
+ report: ${{ steps.report.outputs.report || 'Did not run.' }}
+
+ name: Package and artifacts release
+ runs-on: ubuntu-22.04
+
+ steps:
+ - uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+
+ - name: Install dependencies
+ run: |
+ sudo apt-get install --no-install-recommends -y gnome-keyring sqlite3
+ python -m pip install --upgrade pip
+ python -m pip install poetry
+ make setup_env
+
+ - name: Set common environment variables
+ run: |
+ PROJECT_NAME_AND_VERSION=$(poetry version)
+ PROJECT_VERSION=$(echo "$PROJECT_NAME_AND_VERSION" | cut -d ' ' -f 2)
+
+ GIT_TAG=$(echo "${{ github.ref }}" | sed 's/refs\/tags\///g')
+ if [[ "${GIT_TAG}" != "v${PROJECT_VERSION}" ]]; then
+ echo "Mismatch between tag and version: ${GIT_TAG}, v${PROJECT_VERSION}"
+ exit 1
+ fi
+
+ IMAGE_BASE="${{ secrets.IMAGE_BASE }}"
+ ALL_IMAGE_TAGS="${IMAGE_BASE}:${GIT_TAG}"
+
+ IS_LATEST=$(poetry run python script/make_utils/is_latest.py "${PROJECT_VERSION}")
+ if [[ "${IS_LATEST}" == "True" ]]; then
+ ALL_IMAGE_TAGS="${ALL_IMAGE_TAGS},${IMAGE_BASE}:latest"
+ fi
+
+ IS_PRERELEASE=$(poetry run python script/make_utils/is_prerelease.py "${PROJECT_VERSION}")
+
+ echo "PROJECT_VERSION=${PROJECT_VERSION}" >> "$GITHUB_ENV"
+ echo "GIT_TAG=${GIT_TAG}" >> "$GITHUB_ENV"
+
+ echo "IS_LATEST=${IS_LATEST}" >> "$GITHUB_ENV"
+ echo "IS_PRERELEASE=${IS_PRERELEASE}" >> "$GITHUB_ENV"
+
+ echo "ALL_IMAGE_TAGS=${ALL_IMAGE_TAGS}" >> "$GITHUB_ENV"
+ echo "VERSIONED_IMAGE_TAG=${IMAGE_BASE}:${GIT_TAG}" >> "$GITHUB_ENV"
+
+ - name: Create directory for artifacts
+ if: ${{ success() && !cancelled() }}
+ run: |
+ ARTIFACTS_RAW_DIR=/tmp/release_artifacts/raw
+ mkdir -p "${ARTIFACTS_RAW_DIR}"
+ echo "ARTIFACTS_RAW_DIR=${ARTIFACTS_RAW_DIR}" >> "$GITHUB_ENV"
+
+ ARTIFACTS_PACKAGED_DIR=/tmp/release_artifacts/packaged
+ mkdir -p "${ARTIFACTS_PACKAGED_DIR}"
+ echo "ARTIFACTS_PACKAGED_DIR=${ARTIFACTS_PACKAGED_DIR}" >> "$GITHUB_ENV"
+
+ - name: Download changelog
+ if: ${{ success() && !cancelled() }}
+ id: download-changelog
+ uses: actions/download-artifact@9782bd6a9848b53b110e712e20e42d89988822b7
+ with:
+ name: changelog
+ path: ${{ env.ARTIFACTS_RAW_DIR }}/changelog/
+
+ - name: Download python3 wheel
+ if: ${{ success() && !cancelled() }}
+ id: download-wheel
+ uses: actions/download-artifact@9782bd6a9848b53b110e712e20e42d89988822b7
+ with:
+ name: py3-wheel
+ path: ${{ env.ARTIFACTS_PACKAGED_DIR }}/
+
+ - name: Copy wheel to docker build context
+ if: ${{ success() && !cancelled() }}
+ run: |
+ mkdir -p ./pkg
+ cp "${{ env.ARTIFACTS_PACKAGED_DIR }}"/*.whl ./pkg
+
+ - name: Login to GitHub Container Registry
+ if: ${{ success() && !cancelled() }}
+ uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
+ with:
+ registry: ghcr.io
+ username: ${{ secrets.BOT_USERNAME }}
+ password: ${{ secrets.BOT_TOKEN }}
+
+ - name: Login to DockerHub
+ if: ${{ success() && !cancelled() }}
+ uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
+ with:
+ username: ${{ secrets.DOCKERHUB_USER }}
+ password: ${{ secrets.DOCKERHUB_TOKEN }}
+
+ - name: Build concrete-numpy Image
+ if: ${{ success() && !cancelled() }}
+ uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
+ with:
+ context: .
+ file: docker/Dockerfile.release
+ load: true
+ push: false
+ tags: "${{ env.ALL_IMAGE_TAGS }}"
+ no-cache: true
+
+ - name: Release image sanity check
+ if: ${{ success() && !cancelled() }}
+ run: |
+ echo "Running sanity check for ${VERSIONED_IMAGE_TAG}"
+ docker run --rm -v "$(pwd)"/docker/release_resources:/data \
+ "${VERSIONED_IMAGE_TAG}" /bin/bash -c "python ./sanity_check.py"
+
+ - name: Create ready to upload/packaged artifacts and release body
+ if: ${{ success() && !cancelled() }}
+ env:
+ RAW_CHANGELOG_DIR: ${{ steps.download-changelog.outputs.download-path }}
+ run: |
+ cp "${RAW_CHANGELOG_DIR}"/* "${ARTIFACTS_PACKAGED_DIR}"
+ ls -a "${ARTIFACTS_PACKAGED_DIR}"
+
+ RELEASE_BODY_FILE=RELEASE_BODY.md
+ echo "RELEASE_BODY_FILE=${RELEASE_BODY_FILE}" >> "$GITHUB_ENV"
+
+ cp ./script/actions_utils/RELEASE_TEMPLATE.md "${RELEASE_BODY_FILE}"
+ {
+ echo "Docker Image: ${VERSIONED_IMAGE_TAG}";
+ echo "PyPI Package: https://pypi.org/project/concrete-numpy/${PROJECT_VERSION}";
+ echo "";
+ } >> "${RELEASE_BODY_FILE}"
+ cat "${RAW_CHANGELOG_DIR}"/* >> "${RELEASE_BODY_FILE}"
+
+ - name: Push release docker image
+ if: ${{ success() && !cancelled() }}
+ run: |
+ docker image push --all-tags "${{ secrets.IMAGE_BASE }}"
+
+      - name: Push package to PyPI
+ if: ${{ success() && !cancelled() }}
+ run: |
+ poetry run twine upload \
+          -u "${{ secrets.REPO_USERNAME }}" -p "${{ secrets.REPO_PASSWORD }}" \
+ ${{ secrets.REPO_DETAILS }} "${{ env.ARTIFACTS_PACKAGED_DIR }}"/*.whl
+
+ - name: Create GitHub release
+ if: ${{ success() && !cancelled() }}
+ id: create-release
+ uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844
+ with:
+ body_path: ${{ env.RELEASE_BODY_FILE }}
+ prerelease: ${{ fromJSON(env.IS_PRERELEASE) }}
+ files: |
+ ${{ env.ARTIFACTS_PACKAGED_DIR }}/*
+ tag_name: ${{ env.GIT_TAG }}
+ fail_on_unmatched_files: true
+ token: ${{ secrets.BOT_TOKEN }}
+
+ - name: Set notification report
+ id: report
+ if: ${{ always() }}
+ run: |
+ REPORT="Creating release for ${GIT_TAG} finished with status ${{ job.status }}. \
+ GitHub release link: ${{ steps.create-release.outputs.url }}."
+ echo "${REPORT}"
+ echo "::set-output name=report::${REPORT}"
+ echo "REPORT=${REPORT}" >> "$GITHUB_ENV"
+
+ - name: Slack Notification
+ if: ${{ always() && !success() }}
+ continue-on-error: true
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
+ SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
+ SLACK_COLOR: ${{ job.status }}
+ SLACK_MESSAGE: "${{ env.REPORT }} (${{ env.ACTION_RUN_URL }})"
+ SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
+
+ send-report:
+ if: ${{ always() }}
+ needs:
+ [
+ matrix-preparation,
+ start-runner-linux,
+ build-linux,
+ stop-runner-linux,
+ build-macos,
+ package-release,
+ ]
+
+ name: Send Slack notification
+ runs-on: ubuntu-22.04
+ steps:
+ - uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
+
+ - name: Prepare whole job status
+ if: ${{ always() }}
+ continue-on-error: true
+ env:
+ NEEDS_JSON: ${{ toJSON(needs) }}
+ run: |
+ echo "${NEEDS_JSON}" > /tmp/needs_context.json
+ JOB_STATUS=$(python3 ./script/actions_utils/actions_combine_status.py \
+ --needs_context_json /tmp/needs_context.json)
+ echo "JOB_STATUS=${JOB_STATUS}" >> "$GITHUB_ENV"
+
+ - name: Slack Notification
+ if: ${{ always() }}
+ continue-on-error: true
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
+ SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
+ SLACK_COLOR: ${{ env.JOB_STATUS || 'failure' }}
+ SLACK_MESSAGE: "Full run finished with status ${{ env.JOB_STATUS || 'failure' }} \
+ (${{ env.ACTION_RUN_URL }})\n\
+ - matrix-preparation: ${{ needs.matrix-preparation.result || 'Did not run.'}}\n\n\
+ - start-runner-linux: ${{ needs.start-runner-linux.result || 'Did not run.'}}\n\n\
+ - build-linux: ${{ needs.build-linux.result || 'Did not run.' }}\n\n\
+ - stop-runner-linux: ${{ needs.stop-runner-linux.result || 'Did not run.'}}\n\n\
+ - build-macos: ${{ needs.build-macos.result || 'Did not run.' }}\n\n\
+ - package-release: ${{ needs.package-release.outputs.report || 'Did not run.' }}"
+ SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
diff --git a/.github/workflows/daily-benchmarks.yaml b/.github/workflows/daily-benchmarks.yaml
new file mode 100644
index 000000000..1a6fd2c79
--- /dev/null
+++ b/.github/workflows/daily-benchmarks.yaml
@@ -0,0 +1,134 @@
+name: Daily Benchmarks
+on:
+ workflow_dispatch:
+
+env:
+ ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
+
+jobs:
+ perform:
+ name: Run Benchmarks on EC2 and Publish Results to Progress Tracker
+ runs-on: ubuntu-22.04
+ steps:
+ - name: Configure AWS Credentials
+ uses: aws-actions/configure-aws-credentials@67fbcbb121271f7775d2e7715933280b06314838
+ with:
+ aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ aws-region: eu-west-3 # Europe (Paris)
+
+ - name: Write SSH Key To A File
+ run: echo "$SSH_KEY" > ~/ssh-key && chmod 400 ~/ssh-key
+ env:
+ SSH_KEY: ${{ secrets.BENCHMARKS_EC2_SSH_KEY }}
+
+ - name: Start AMD EC2 Instance
+ run: |
+ aws ec2 start-instances --instance-ids ${{ secrets.BENCHMARKS_EC2_AMD_INSTANCE_ID }}
+
+ - name: Wait For The AMD EC2 Instance To Get An IP Address
+ run: |
+ # shellcheck disable=SC2016,SC2026
+ timeout 180 bash -c 'until [[ $(aws ec2 describe-instances --instance-ids ${{ secrets.BENCHMARKS_EC2_AMD_INSTANCE_ID }} --query 'Reservations[].Instances[].PublicIpAddress' --output text) != "" ]]; do sleep 0.1; done'
+
+ - name: Get Public IP Address of AMD EC2 Instance
+ id: amd-public-ip
+ run: echo "::set-output name=value::$(aws ec2 describe-instances --region eu-west-3 --instance-ids ${{ secrets.BENCHMARKS_EC2_AMD_INSTANCE_ID }} --query 'Reservations[].Instances[].PublicIpAddress' --output text)"
+
+ - name: Hide Public IP Address of AMD EC2 Instance From GitHub Logs
+ run: echo "::add-mask::${{ steps.amd-public-ip.outputs.value }}"
+
+ - name: Wait For The AMD EC2 Instance To Accept SSH Connections
+ run: timeout 180 bash -c 'until nc -z ${{ steps.amd-public-ip.outputs.value }} 22; do sleep 0.1; done'
+
+ - name: Connect To AMD EC2 Instance, Perform Benchmarks, Publish Results
+ uses: appleboy/ssh-action@b60142998894e495c513803efc6d5d72a72c968a
+ with:
+ host: ${{ steps.amd-public-ip.outputs.value }}
+ username: ${{ secrets.BENCHMARKS_EC2_USERNAME }}
+ key: ${{ secrets.BENCHMARKS_EC2_SSH_KEY }}
+ command_timeout: 720m
+ script: |
+ cd ~/project
+ git pull
+ make docker_publish_measurements
+ docker system prune -f
+
+ - name: Copy AMD EC2 Instance Concrete Logs
+ run: scp -o StrictHostKeyChecking=no -i ~/ssh-key ${{ secrets.BENCHMARKS_EC2_USERNAME }}@${{ steps.amd-public-ip.outputs.value }}:~/project/logs/latest.concrete.log ~/latest.concrete.log
+
+ - name: Copy AMD EC2 Instance ML Logs
+ run: scp -o StrictHostKeyChecking=no -i ~/ssh-key ${{ secrets.BENCHMARKS_EC2_USERNAME }}@${{ steps.amd-public-ip.outputs.value }}:~/project/logs/latest.ml.log ~/latest.ml.log
+
+ - name: Stop AMD EC2 Instance
+ if: ${{ always() }}
+ run: |
+ aws ec2 stop-instances --instance-ids ${{ secrets.BENCHMARKS_EC2_AMD_INSTANCE_ID }}
+
+ - name: Upload Logs of AMD EC2 Instance
+ uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
+ with:
+ name: amd
+ path: ~/latest.*.log
+
+ - name: Start Intel EC2 Instance
+ run: |
+ aws ec2 start-instances --instance-ids ${{ secrets.BENCHMARKS_EC2_INTEL_INSTANCE_ID }}
+
+ - name: Wait For The Intel EC2 Instance To Get An IP Address
+ run: |
+ # shellcheck disable=SC2016,SC2026
+ timeout 180 bash -c 'until [[ $(aws ec2 describe-instances --instance-ids ${{ secrets.BENCHMARKS_EC2_INTEL_INSTANCE_ID }} --query 'Reservations[].Instances[].PublicIpAddress' --output text) != "" ]]; do sleep 0.1; done'
+
+ - name: Get Public IP Address of Intel EC2 Instance
+ id: intel-public-ip
+ run: echo "::set-output name=value::$(aws ec2 describe-instances --region eu-west-3 --instance-ids ${{ secrets.BENCHMARKS_EC2_INTEL_INSTANCE_ID }} --query 'Reservations[].Instances[].PublicIpAddress' --output text)"
+
+ - name: Hide Public IP Address of Intel EC2 Instance From GitHub Logs
+ run: echo "::add-mask::${{ steps.intel-public-ip.outputs.value }}"
+
+ - name: Wait For The Intel EC2 Instance To Accept SSH Connections
+ run: timeout 180 bash -c 'until nc -z ${{ steps.intel-public-ip.outputs.value }} 22; do sleep 0.1; done'
+
+ - name: Connect To Intel EC2 Instance, Perform Benchmarks, Publish Results
+ uses: appleboy/ssh-action@b60142998894e495c513803efc6d5d72a72c968a
+ with:
+ host: ${{ steps.intel-public-ip.outputs.value }}
+ username: ${{ secrets.BENCHMARKS_EC2_USERNAME }}
+ key: ${{ secrets.BENCHMARKS_EC2_SSH_KEY }}
+ command_timeout: 720m
+ script: |
+ cd ~/project
+ git pull
+ make docker_publish_measurements
+ docker system prune -f
+
+ - name: Copy Intel EC2 Instance Concrete Logs
+ run: scp -o StrictHostKeyChecking=no -i ~/ssh-key ${{ secrets.BENCHMARKS_EC2_USERNAME }}@${{ steps.intel-public-ip.outputs.value }}:~/project/logs/latest.concrete.log ~/latest.concrete.log
+
+ - name: Copy Intel EC2 Instance ML Logs
+ run: scp -o StrictHostKeyChecking=no -i ~/ssh-key ${{ secrets.BENCHMARKS_EC2_USERNAME }}@${{ steps.intel-public-ip.outputs.value }}:~/project/logs/latest.ml.log ~/latest.ml.log
+
+ - name: Stop Intel EC2 Instance
+ if: ${{ always() }}
+ run: |
+ aws ec2 stop-instances --instance-ids ${{ secrets.BENCHMARKS_EC2_INTEL_INSTANCE_ID }}
+
+ - name: Upload Logs of Intel EC2 Instance
+ uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
+ with:
+ name: intel
+ path: ~/latest.*.log
+
+ - name: Send Slack Notification
+ if: ${{ always() }}
+ continue-on-error: true
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
+ SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
+ SLACK_COLOR: ${{ job.status }}
+ SLACK_MESSAGE: "Publishing benchmarks finished with status ${{ job.status }} \
+ (${{ env.ACTION_RUN_URL }})"
+ SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 96ef6c0b9..000000000
--- a/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-/target
-Cargo.lock
diff --git a/frontends/concrete-python/.dockerignore b/frontends/concrete-python/.dockerignore
new file mode 100644
index 000000000..1d085cacc
--- /dev/null
+++ b/frontends/concrete-python/.dockerignore
@@ -0,0 +1 @@
+**
diff --git a/frontends/concrete-python/.editorconfig b/frontends/concrete-python/.editorconfig
new file mode 100644
index 000000000..c783b32ab
--- /dev/null
+++ b/frontends/concrete-python/.editorconfig
@@ -0,0 +1,15 @@
+# EditorConfig is awesome: https://EditorConfig.org
+
+# top-most EditorConfig file
+root = true
+
+# Unix-style newlines with a newline ending every file
+[*]
+end_of_line = lf
+insert_final_newline = true
+
+# 4 space indentation
+[*.py]
+charset = utf-8
+indent_style = space
+indent_size = 4
diff --git a/frontends/concrete-python/.gitbook.yaml b/frontends/concrete-python/.gitbook.yaml
new file mode 100644
index 000000000..afdeba9ff
--- /dev/null
+++ b/frontends/concrete-python/.gitbook.yaml
@@ -0,0 +1 @@
+root: ./docs
diff --git a/frontends/concrete-python/.gitignore b/frontends/concrete-python/.gitignore
new file mode 100644
index 000000000..11d46b704
--- /dev/null
+++ b/frontends/concrete-python/.gitignore
@@ -0,0 +1,137 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+coverage.html
+diff-coverage.txt
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+.global-coverage.json
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+docs/_apidoc/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+.docker_venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# Compilation Artifacts
+.artifacts
\ No newline at end of file
diff --git a/frontends/concrete-python/LICENSE b/frontends/concrete-python/LICENSE
new file mode 100644
index 000000000..62fdc0b51
--- /dev/null
+++ b/frontends/concrete-python/LICENSE
@@ -0,0 +1,28 @@
+BSD 3-Clause Clear License
+
+Copyright © 2022 ZAMA.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice, this
+list of conditions and the following disclaimer in the documentation and/or other
+materials provided with the distribution.
+
+3. Neither the name of ZAMA nor the names of its contributors may be used to endorse
+or promote products derived from this software without specific prior written permission.
+
+NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE.
+THIS SOFTWARE IS PROVIDED BY THE ZAMA AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
+IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ZAMA OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
+OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/frontends/concrete-python/Makefile b/frontends/concrete-python/Makefile
new file mode 100644
index 000000000..83f3268be
--- /dev/null
+++ b/frontends/concrete-python/Makefile
@@ -0,0 +1,313 @@
+SHELL:=/bin/bash
+
+DEV_DOCKER_IMG:=concrete-numpy-dev
+DEV_DOCKERFILE:=docker/Dockerfile.dev
+DEV_CONTAINER_VENV_VOLUME:=concrete-numpy-internal-venv
+DEV_CONTAINER_CACHE_VOLUME:=concrete-numpy-internal-cache
+SRC_DIR:=concrete
+
+.PHONY: setup_env # Set up the environment
+setup_env:
+ poetry run python -m pip install -U pip wheel
+ poetry run python -m pip install -U --force-reinstall setuptools
+ if [[ $$(uname) != "Linux" ]] && [[ $$(uname) != "Darwin" ]]; then \
+ poetry install --only dev; \
+ else \
+ poetry install; \
+ fi
+
+.PHONY: sync_env # Synchronise the environment
+sync_env:
+ if [[ $$(uname) != "Linux" ]] && [[ $$(uname) != "Darwin" ]]; then \
+ poetry install --remove-untracked --only dev; \
+ else \
+ poetry install --remove-untracked; \
+ fi
+ $(MAKE) setup_env
+
+.PHONY: python_format # Apply python formatting
+python_format:
+ poetry run env bash ./script/source_format/format_python.sh \
+ --dir $(SRC_DIR) --dir tests --dir script
+
+.PHONY: check_python_format # Check python format
+check_python_format:
+ poetry run env bash ./script/source_format/format_python.sh \
+ --dir $(SRC_DIR) --dir tests --dir script --check
+
+.PHONY: check_finalize_nb # Sanitize notebooks
+check_finalize_nb:
+ poetry run python ./script/nbmake_utils/notebook_finalize.py docs --check
+
+.PHONY: pylint # Run pylint
+pylint:
+ $(MAKE) --keep-going pylint_src pylint_tests pylint_script
+
+.PHONY: pylint_src # Run pylint on sources
+pylint_src:
+ poetry run pylint --rcfile=pylintrc $(SRC_DIR)
+
+.PHONY: pylint_tests # Run pylint on tests
+pylint_tests:
+ @# Disable duplicate code detection (R0801) in tests
+ @# Disable unnecessary lambda (W0108) for tests
+ find ./tests/ -type f -name "*.py" | xargs poetry run pylint --disable=R0801,W0108 --rcfile=pylintrc
+
+.PHONY: pylint_script # Run pylint on scripts
+pylint_script:
+ find ./script/ -type f -name "*.py" | xargs poetry run pylint --rcfile=pylintrc
+
+.PHONY: flake8 # Run flake8
+flake8:
+ poetry run flake8 --max-line-length 100 --per-file-ignores="__init__.py:F401" \
+ $(SRC_DIR)/ tests/ script/
+
+.PHONY: ruff
+ruff:
+ poetry run ruff $(SRC_DIR)/ tests/ script/
+
+.PHONY: python_linting # Run python linters
+python_linting: pylint flake8 ruff
+
+.PHONY: conformance # Run command to fix some conformance issues automatically
+conformance: finalize_nb python_format supported_functions licenses
+
+.PHONY: pcc # Run pre-commit checks
+pcc:
+ @$(MAKE) --keep-going --jobs $$(./script/make_utils/ncpus.sh) --output-sync=recurse \
+ --no-print-directory pcc_internal
+
+PCC_DEPS := check_python_format check_finalize_nb python_linting mypy_ci pydocstyle shell_lint
+PCC_DEPS += check_supported_functions # check_licenses
+
+# Not commented on purpose for make help, since internal
+.PHONY: pcc_internal
+pcc_internal: $(PCC_DEPS)
+
+# One can reproduce a pytest run thanks to the --randomly-seed option, which is provided by
+# pytest-randomly
+.PHONY: pytest # Run pytest
+pytest:
+ poetry run pytest -svv \
+ --global-coverage=.global-coverage.json \
+ -n $$(./script/make_utils/ncpus.sh) \
+ --cov=$(SRC_DIR) --cov-fail-under=100 \
+ --randomly-dont-reorganize \
+ --cov-report=term-missing:skip-covered tests/
+
+# Not a huge fan of ignoring missing imports, but some packages do not have typing stubs
+.PHONY: mypy # Run mypy
+mypy:
+ poetry run mypy -p $(SRC_DIR) --ignore-missing-imports
+
+# Friendly target to run mypy without ignoring missing stubs and still get error messages.
+# Allows us to see which stubs we are missing.
+.PHONY: mypy_ns # Run mypy (without ignoring missing stubs)
+mypy_ns:
+ poetry run mypy -p $(SRC_DIR)
+
+.PHONY: mypy_test # Run mypy on test files
+mypy_test:
+ find ./tests/ -name "*.py" | xargs poetry run mypy --ignore-missing-imports
+
+.PHONY: mypy_script # Run mypy on scripts
+mypy_script:
+ find ./script/ -name "*.py" | xargs poetry run mypy --ignore-missing-imports
+
+# The plus indicates that make will be called by the command, allowing it to share the context with
+# the parent make execution. We serialize calls to these targets as they may overwrite each other's
+# cache, which can cause issues.
+.PHONY: mypy_ci # Run all mypy checks for CI
+mypy_ci:
+ $(MAKE) --keep-going mypy mypy_test mypy_script
+
+.PHONY: docker_build # Build dev docker
+docker_build:
+ BUILD_ARGS=; \
+ if [[ $$(uname) == "Linux" ]]; then \
+ BUILD_ARGS="--build-arg BUILD_UID=$$(id -u) --build-arg BUILD_GID=$$(id -g)"; \
+ fi; \
+ DOCKER_BUILDKIT=1 docker build $${BUILD_ARGS:+$$BUILD_ARGS} \
+ --pull -t $(DEV_DOCKER_IMG) -f $(DEV_DOCKERFILE) .
+
+.PHONY: docker_rebuild # Rebuild docker
+docker_rebuild: docker_clean_volumes
+ BUILD_ARGS=; \
+ if [[ $$(uname) == "Linux" ]]; then \
+ BUILD_ARGS="--build-arg BUILD_UID=$$(id -u) --build-arg BUILD_GID=$$(id -g)"; \
+ fi; \
+ DOCKER_BUILDKIT=1 docker build $${BUILD_ARGS:+$$BUILD_ARGS} \
+ --pull --no-cache -t $(DEV_DOCKER_IMG) -f $(DEV_DOCKERFILE) .
+
+.PHONY: docker_start # Launch docker
+docker_start:
+ @# the slash before pwd is for Windows
+ docker run --rm -it \
+ -p 8888:8888 \
+ --env DISPLAY=host.docker.internal:0 \
+ --volume /"$$(pwd)":/src \
+ --volume $(DEV_CONTAINER_VENV_VOLUME):/home/dev_user/dev_venv \
+ --volume $(DEV_CONTAINER_CACHE_VOLUME):/home/dev_user/.cache \
+ $(DEV_DOCKER_IMG)
+
+.PHONY: docker_build_and_start # Docker build and start
+docker_build_and_start: docker_build docker_start
+
+.PHONY: docker_bas # Docker build and start
+docker_bas: docker_build_and_start
+
+.PHONY: docker_clean_volumes # Docker clean volumes
+docker_clean_volumes:
+ docker volume rm -f $(DEV_CONTAINER_VENV_VOLUME)
+ docker volume rm -f $(DEV_CONTAINER_CACHE_VOLUME)
+
+.PHONY: docker_cv # Docker clean volumes
+docker_cv: docker_clean_volumes
+
+.PHONY: pydocstyle # Launch syntax checker on source code documentation
+pydocstyle:
+ @# From http://www.pydocstyle.org/en/stable/error_codes.html
+ poetry run pydocstyle $(SRC_DIR) --convention google --add-ignore=D1,D200,D202,D212,D402,D417 --add-select=D401
+
+.PHONY: finalize_nb # Sanitize notebooks
+finalize_nb:
+ poetry run python ./script/nbmake_utils/notebook_finalize.py docs
+
+# A warning in a package unrelated to the project made pytest fail with notebooks.
+# Run notebook tests with warnings ignored, as sources are already tested with warnings treated as errors.
+.PHONY: pytest_nb # Launch notebook tests
+pytest_nb:
+ find docs -name "*.ipynb" | grep -v _build | grep -v .ipynb_checkpoints | xargs poetry run pytest -Wignore --nbmake
+
+.PHONY: jupyter # Launch jupyter notebook
+jupyter:
+ poetry run jupyter notebook --allow-root --no-browser --ip=0.0.0.0
+
+.PHONY: release_docker # Build a docker release image
+release_docker:
+ ./docker/build_release_image.sh
+
+.PHONY: upgrade_py_deps # Upgrade python dependencies
+upgrade_py_deps:
+ ./script/make_utils/upgrade_deps.sh
+
+# Keeping this target as it proved useful before we had a proper package: it allowed running code
+# that pytest-codeblocks failed to execute when the package was not installed via pip.
+# This is done by hand as pytest-codeblocks was failing with our native extensions.
+# See refused PR on the project here: https://github.com/nschloe/pytest-codeblocks/pull/58
+# Test code blocks using a custom python script in the documentation
+.PHONY: test_codeblocks
+test_codeblocks:
+ poetry run python ./script/make_utils/test_md_python_code.py --md_dir docs/
+
+.PHONY: pytest_codeblocks # Test code blocks using pytest in the documentation
+pytest_codeblocks:
+ poetry run pytest --codeblocks -svv -n $$(./script/make_utils/ncpus.sh) \
+ --randomly-dont-reorganize docs/
+
+# From https://stackoverflow.com/a/63523300 for the find command
+.PHONY: shell_lint # Lint all bash scripts
+shell_lint:
+ find \( -path "./.venv" -o -path "./.docker_venv" \) -prune -o -type f -name "*.sh" -print | \
+ xargs shellcheck
+
+.PHONY: set_version_no_commit # Dry run for set_version
+set_version_no_commit:
+ @if [[ "$$VERSION" == "" ]]; then \
+ echo "VERSION env variable is empty. Please set it to the desired version."; \
+ exit 1; \
+ fi && \
+ poetry run python ./script/make_utils/version_utils.py set-version --version "$${VERSION}"
+
+.PHONY: set_version # Generate a new version number and update all files with it accordingly
+set_version:
+ @if [[ "$$VERSION" == "" ]]; then \
+ echo "VERSION env variable is empty. Please set it to the desired version."; \
+ exit 1; \
+ fi && \
+ STASH_COUNT="$$(git stash list | wc -l)" && \
+ git stash && \
+ poetry run python ./script/make_utils/version_utils.py set-version --version "$${VERSION}" && \
+ git add -u && \
+ git commit -m "chore: bump version to $${VERSION}" && \
+ NEW_STASH_COUNT="$$(git stash list | wc -l)" && \
+ if [[ "$$NEW_STASH_COUNT" != "$$STASH_COUNT" ]]; then \
+ git stash pop; \
+ fi
+
+.PHONY: changelog # Generate a changelog
+changelog:
+ PROJECT_VER=($$(poetry version)) && \
+ PROJECT_VER="$${PROJECT_VER[1]}" && \
+ poetry run python ./script/make_utils/changelog_helper.py > "CHANGELOG_$${PROJECT_VER}.md"
+
+.PHONY: release # Create a new release
+release:
+ @PROJECT_VER=($$(poetry version)) && \
+ PROJECT_VER="$${PROJECT_VER[1]}" && \
+ TAG_NAME="v$${PROJECT_VER}" && \
+ git fetch --tags --force && \
+ git tag -s -a -m "$${TAG_NAME} release" "$${TAG_NAME}" && \
+ git push origin "refs/tags/$${TAG_NAME}"
+
+
+.PHONY: show_scope # Show the accepted types and optional scopes (for git conventional commits)
+show_scope:
+ @echo "Accepted types and optional scopes:"
+ @grep feat .github/workflows/continuous-integration.yaml | grep pattern | cut -f 2- -d ":" | cut -f 2- -d " "
+
+.PHONY: show_type # Show the accepted types and optional scopes (for git conventional commits)
+show_type: show_scope
+
+# grep recursively, ignore binary files, print file line, print file name
+# exclude dot dirs, exclude pylintrc (would match the notes)
+# exclude notebooks (sometimes matches in svg text), match the notes in this directory
+.PHONY: todo # List all todo left in the code
+todo:
+ @NOTES_ARGS=$$(poetry run python ./script/make_utils/get_pylintrc_notes.py \
+ --pylintrc-path pylintrc) && \
+ grep -rInH --exclude-dir='.[^.]*' --exclude=pylintrc --exclude='*.ipynb' "$${NOTES_ARGS}" .
+
+
+.PHONY: supported_functions # Update docs with supported functions
+supported_functions:
+ poetry run python script/doc_utils/gen_supported_ufuncs.py docs/getting-started/compatibility.md
+
+.PHONY: check_supported_functions # Check supported functions (for the doc)
+check_supported_functions:
+ poetry run python script/doc_utils/gen_supported_ufuncs.py docs/getting-started/compatibility.md --check
+
+.PHONY: licenses # Generate the list of licenses of dependencies
+licenses:
+ @./script/make_utils/licenses.sh
+
+.PHONY: check_licenses # Check if the licenses of dependencies have changed
+check_licenses:
+ @TMP_OUT="$$(mktemp)" && \
+ if ! poetry run env bash ./script/make_utils/licenses.sh --check > "$${TMP_OUT}"; then \
+ cat "$${TMP_OUT}"; \
+ rm -f "$${TMP_OUT}"; \
+ echo "Error while checking licenses, see log above."; \
+ echo "Consider re-running 'make licenses'"; \
+ exit 1; \
+ else \
+ echo "Licenses check OK"; \
+ fi
+
+.PHONY: help # Generate list of targets with descriptions
+help:
+ @grep '^.PHONY: .* #' Makefile | sed 's/\.PHONY: \(.*\) # \(.*\)/\1\t\2/' | expand -t30 | sort
+
+.PHONY: pip_audit # Run pip-audit and check if there are known vulnerabilities in our dependencies
+pip_audit:
+ poetry run pip-audit
+
+.PHONY: clean_local_git # Tell the user how to delete local git branches, except main
+clean_local_git:
+ @git fetch --all --prune
+ @echo "Consider doing: "
+ @echo
+ @# Don't consider deleting `main` or the current branch
+ @git branch | grep -v "^*" | grep -v main | xargs echo "git branch -D "
+ @echo
diff --git a/frontends/concrete-python/README.md b/frontends/concrete-python/README.md
new file mode 100644
index 000000000..249b8be72
--- /dev/null
+++ b/frontends/concrete-python/README.md
@@ -0,0 +1,145 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+**Concrete Numpy** is an open-source library which simplifies the use of fully homomorphic encryption (FHE) in Python.
+
+FHE is a powerful cryptographic tool that allows computation to be performed directly on encrypted data, without needing to decrypt it first.
+
+With FHE, you can build services that preserve user privacy. FHE also protects against data breaches: since everything is computed on encrypted data, no sensitive data is leaked even if the server is compromised.
+
+## Main features
+
+- Ability to compile Python functions (that may use NumPy internally) to their FHE equivalents, to operate on encrypted data
+- Support for a [large collection of operators](https://docs.zama.ai/concrete-numpy/getting-started/compatibility)
+- Partial support for floating points
+- Support for table lookups on integers
+- Support for integration with client/server architectures
+
+## Installation
+
+| OS / HW | Available on Docker | Available on PyPI |
+| :----------------------------------: | :-----------------: | :--------------: |
+| Linux | Yes | Yes |
+| Windows | Yes | Coming soon |
+| Windows Subsystem for Linux | Yes | Yes |
+| macOS (Intel) | Yes | Yes |
+| macOS (Apple Silicon, i.e. M1, M2, etc.) | Yes (Rosetta) | Coming soon |
+
+
+The preferred way to install Concrete Numpy is through PyPI:
+
+```shell
+pip install concrete-numpy
+```
+
+You can also get concrete-numpy as a Docker image by pulling the latest release:
+
+```shell
+docker pull zamafhe/concrete-numpy:v0.10.0
+```
+
+You can find more detailed installation instructions in [installing.md](docs/getting-started/installing.md).
+
+## Getting started
+
+```python
+import concrete.numpy as cnp
+
+def add(x, y):
+ return x + y
+
+compiler = cnp.Compiler(add, {"x": "encrypted", "y": "encrypted"})
+inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]
+
+print("Compiling...")
+circuit = compiler.compile(inputset)
+
+print("Generating keys...")
+circuit.keygen()
+
+examples = [(3, 4), (1, 2), (7, 7), (0, 0)]
+for example in examples:
+ encrypted_example = circuit.encrypt(*example)
+ encrypted_result = circuit.run(encrypted_example)
+ result = circuit.decrypt(encrypted_result)
+ print(f"Evaluation of {' + '.join(map(str, example))} homomorphically = {result}")
+```
+
+Alternatively, if you have a simple function that you can decorate and you don't need explicit control over key generation, encryption, evaluation, and decryption:
+
+```python
+import concrete.numpy as cnp
+
+@cnp.compiler({"x": "encrypted", "y": "encrypted"})
+def add(x, y):
+ return x + y
+
+inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]
+
+print("Compiling...")
+circuit = add.compile(inputset)
+
+examples = [(3, 4), (1, 2), (7, 7), (0, 0)]
+for example in examples:
+ result = circuit.encrypt_run_decrypt(*example)
+ print(f"Evaluation of {' + '.join(map(str, example))} homomorphically = {result}")
+```
+
+## Documentation
+
+Full, comprehensive documentation is available at [https://docs.zama.ai/concrete-numpy](https://docs.zama.ai/concrete-numpy).
+
+## Target users
+
+Concrete Numpy is a generic library that supports a variety of use cases. Because of this flexibility,
+it doesn't provide primitives tailored to specific applications.
+
+If you have a specific use case, or a specific field of computation, you may want to build abstractions on top of Concrete Numpy.
+
+One such example is [Concrete ML](https://github.com/zama-ai/concrete-ml), which is built on top of Concrete Numpy to simplify Machine Learning oriented use cases.
+
+## Tutorials
+
+Various tutorials are provided in the documentation to help you start writing homomorphic programs:
+
+- How to use Concrete Numpy with [Decorators](https://docs.zama.ai/concrete-numpy/tutorials/decorator)
+- Partial support for [Floating Points](https://docs.zama.ai/concrete-numpy/tutorials/floating_points)
+- How to perform [Table Lookup](https://docs.zama.ai/concrete-numpy/tutorials/table_lookup)
+
+More generally, if you have built awesome projects using Concrete Numpy, feel free to let us know and we'll link to them!
+
+## Need support?
+
+
+
+
+
+## License
+
+This software is distributed under the BSD-3-Clause-Clear license. If you have any questions, please contact us at hello@zama.ai.
diff --git a/frontends/concrete-python/concrete/__init__.py b/frontends/concrete-python/concrete/__init__.py
new file mode 100644
index 000000000..616df0b7f
--- /dev/null
+++ b/frontends/concrete-python/concrete/__init__.py
@@ -0,0 +1,7 @@
+"""
+Set up the concrete package so it can be extended with the numpy module.
+"""
+
+# Do not modify, this is to have a compatible namespace package
+# https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#pkg-resources-style-namespace-packages
+__import__("pkg_resources").declare_namespace(__name__) # pragma: no cover
diff --git a/frontends/concrete-python/concrete/numpy/__init__.py b/frontends/concrete-python/concrete/numpy/__init__.py
new file mode 100644
index 000000000..1ad2e263a
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/__init__.py
@@ -0,0 +1,166 @@
+"""
+Export everything that users might need.
+"""
+
+from concrete.compiler import EvaluationKeys, PublicArguments, PublicResult
+
+from .compilation import (
+ DEFAULT_GLOBAL_P_ERROR,
+ DEFAULT_P_ERROR,
+ Circuit,
+ Client,
+ ClientSpecs,
+ Compiler,
+ Configuration,
+ DebugArtifacts,
+ EncryptionStatus,
+ Server,
+)
+from .compilation.decorators import circuit, compiler
+from .extensions import (
+ AutoRounder,
+ LookupTable,
+ array,
+ one,
+ ones,
+ round_bit_pattern,
+ tag,
+ univariate,
+ zero,
+ zeros,
+)
+from .mlir.utils import MAXIMUM_TLU_BIT_WIDTH
+from .representation import Graph
+from .tracing.typing import (
+ f32,
+ f64,
+ int1,
+ int2,
+ int3,
+ int4,
+ int5,
+ int6,
+ int7,
+ int8,
+ int9,
+ int10,
+ int11,
+ int12,
+ int13,
+ int14,
+ int15,
+ int16,
+ int17,
+ int18,
+ int19,
+ int20,
+ int21,
+ int22,
+ int23,
+ int24,
+ int25,
+ int26,
+ int27,
+ int28,
+ int29,
+ int30,
+ int31,
+ int32,
+ int33,
+ int34,
+ int35,
+ int36,
+ int37,
+ int38,
+ int39,
+ int40,
+ int41,
+ int42,
+ int43,
+ int44,
+ int45,
+ int46,
+ int47,
+ int48,
+ int49,
+ int50,
+ int51,
+ int52,
+ int53,
+ int54,
+ int55,
+ int56,
+ int57,
+ int58,
+ int59,
+ int60,
+ int61,
+ int62,
+ int63,
+ int64,
+ tensor,
+ uint1,
+ uint2,
+ uint3,
+ uint4,
+ uint5,
+ uint6,
+ uint7,
+ uint8,
+ uint9,
+ uint10,
+ uint11,
+ uint12,
+ uint13,
+ uint14,
+ uint15,
+ uint16,
+ uint17,
+ uint18,
+ uint19,
+ uint20,
+ uint21,
+ uint22,
+ uint23,
+ uint24,
+ uint25,
+ uint26,
+ uint27,
+ uint28,
+ uint29,
+ uint30,
+ uint31,
+ uint32,
+ uint33,
+ uint34,
+ uint35,
+ uint36,
+ uint37,
+ uint38,
+ uint39,
+ uint40,
+ uint41,
+ uint42,
+ uint43,
+ uint44,
+ uint45,
+ uint46,
+ uint47,
+ uint48,
+ uint49,
+ uint50,
+ uint51,
+ uint52,
+ uint53,
+ uint54,
+ uint55,
+ uint56,
+ uint57,
+ uint58,
+ uint59,
+ uint60,
+ uint61,
+ uint62,
+ uint63,
+ uint64,
+)
diff --git a/frontends/concrete-python/concrete/numpy/compilation/__init__.py b/frontends/concrete-python/concrete/numpy/compilation/__init__.py
new file mode 100644
index 000000000..7fa7edf3a
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/__init__.py
@@ -0,0 +1,11 @@
+"""
+Glue the compilation process together.
+"""
+
+from .artifacts import DebugArtifacts
+from .circuit import Circuit
+from .client import Client
+from .compiler import Compiler, EncryptionStatus
+from .configuration import DEFAULT_GLOBAL_P_ERROR, DEFAULT_P_ERROR, Configuration
+from .server import Server
+from .specs import ClientSpecs
diff --git a/frontends/concrete-python/concrete/numpy/compilation/artifacts.py b/frontends/concrete-python/concrete/numpy/compilation/artifacts.py
new file mode 100644
index 000000000..d15ee468a
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/artifacts.py
@@ -0,0 +1,189 @@
+"""
+Declaration of `DebugArtifacts` class.
+"""
+
+import inspect
+import platform
+import shutil
+import subprocess
+from pathlib import Path
+from typing import Callable, Dict, List, Optional, Union
+
+from ..representation import Graph
+
+DEFAULT_OUTPUT_DIRECTORY: Path = Path(".artifacts")
+
+
+class DebugArtifacts:
+ """
+ DebugArtifacts class, to export information about the compilation process.
+ """
+
+ output_directory: Path
+
+ source_code: Optional[str]
+ parameter_encryption_statuses: Dict[str, str]
+ textual_representations_of_graphs: Dict[str, List[str]]
+ final_graph: Optional[Graph]
+ mlir_to_compile: Optional[str]
+ client_parameters: Optional[bytes]
+
+ def __init__(self, output_directory: Union[str, Path] = DEFAULT_OUTPUT_DIRECTORY):
+ self.output_directory = Path(output_directory)
+
+ self.source_code = None
+ self.parameter_encryption_statuses = {}
+ self.textual_representations_of_graphs = {}
+ self.final_graph = None
+ self.mlir_to_compile = None
+ self.client_parameters = None
+
+ def add_source_code(self, function: Union[str, Callable]):
+ """
+ Add source code of the function being compiled.
+
+ Args:
+ function (Union[str, Callable]):
+ either the source code of the function or the function itself
+ """
+
+ try:
+ self.source_code = (
+ function if isinstance(function, str) else inspect.getsource(function)
+ )
+ except OSError: # pragma: no cover
+ self.source_code = "unavailable"
+
+ def add_parameter_encryption_status(self, name: str, encryption_status: str):
+ """
+ Add parameter encryption status of a parameter of the function being compiled.
+
+ Args:
+ name (str):
+ name of the parameter
+
+ encryption_status (str):
+ encryption status of the parameter
+ """
+
+ self.parameter_encryption_statuses[name] = encryption_status
+
+ def add_graph(self, name: str, graph: Graph):
+ """
+ Add a representation of the function being compiled.
+
+ Args:
+ name (str):
+ name of the graph (e.g., initial, optimized, final)
+
+ graph (Graph):
+ a representation of the function being compiled
+ """
+
+ if name not in self.textual_representations_of_graphs:
+ self.textual_representations_of_graphs[name] = []
+
+ textual_representation = graph.format()
+ self.textual_representations_of_graphs[name].append(textual_representation)
+
+ self.final_graph = graph
+
+ def add_mlir_to_compile(self, mlir: str):
+ """
+ Add textual representation of the resulting MLIR.
+
+ Args:
+ mlir (str):
+ textual representation of the resulting MLIR
+ """
+
+ self.mlir_to_compile = mlir
+
+ def add_client_parameters(self, client_parameters: bytes):
+ """
+ Add client parameters used.
+
+ Args:
+ client_parameters (bytes): client parameters
+ """
+
+ self.client_parameters = client_parameters
+
+ def export(self):
+ """
+ Export the collected information to `self.output_directory`.
+ """
+
+ # pylint: disable=too-many-branches
+
+ output_directory = self.output_directory
+ if output_directory.exists():
+ shutil.rmtree(output_directory)
+ output_directory.mkdir(parents=True)
+
+ with open(output_directory.joinpath("environment.txt"), "w", encoding="utf-8") as f:
+ f.write(f"{platform.platform()} {platform.version()}\n")
+ f.write(f"Python {platform.python_version()}\n")
+
+ with open(output_directory.joinpath("requirements.txt"), "w", encoding="utf-8") as f:
+ # example `pip list` output
+
+ # Package Version
+ # ----------------------------- ---------
+ # alabaster 0.7.12
+ # appdirs 1.4.4
+ # ... ...
+ # ... ...
+ # wrapt 1.12.1
+ # zipp 3.5.0
+
+ pip_process = subprocess.run(
+ ["pip", "--disable-pip-version-check", "list"], stdout=subprocess.PIPE, check=True
+ )
+ dependencies = iter(pip_process.stdout.decode("utf-8").split("\n"))
+
+ # skip 'Package ... Version' line
+ next(dependencies)
+
+ # skip '------- ... -------' line
+ next(dependencies)
+
+ for dependency in dependencies:
+ tokens = [token for token in dependency.split(" ") if token != ""] # noqa: S105
+ if len(tokens) == 0:
+ continue
+
+ name = tokens[0]
+ version = tokens[1]
+
+ f.write(f"{name}=={version}\n")
+
+ if self.source_code is not None:
+ with open(output_directory.joinpath("function.txt"), "w", encoding="utf-8") as f:
+ f.write(self.source_code)
+
+ if len(self.parameter_encryption_statuses) > 0:
+ with open(output_directory.joinpath("parameters.txt"), "w", encoding="utf-8") as f:
+ for name, parameter in self.parameter_encryption_statuses.items():
+ f.write(f"{name} :: {parameter}\n")
+
+ identifier = 0
+
+ textual_representations = self.textual_representations_of_graphs.items()
+ for name, representations in textual_representations:
+ for representation in representations:
+ identifier += 1
+ output_path = output_directory.joinpath(f"{identifier}.{name}.graph.txt")
+ with open(output_path, "w", encoding="utf-8") as f:
+ f.write(f"{representation}\n")
+
+ if self.mlir_to_compile is not None:
+ assert self.final_graph is not None
+ with open(output_directory.joinpath("mlir.txt"), "w", encoding="utf-8") as f:
+ f.write(f"{self.mlir_to_compile}\n")
+
+ if self.client_parameters is not None:
+ with open(output_directory.joinpath("client_parameters.json"), "wb") as f:
+ f.write(self.client_parameters)
+
+ # pylint: enable=too-many-branches
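The `requirements.txt` logic above skips the two header lines of `pip list` output and pins each remaining package. A self-contained sketch of that transformation (the sample text below is illustrative, not real `pip list` output):

```python
# Illustrative sketch of how DebugArtifacts.export turns `pip list`
# output into requirements-style pins; the sample text is made up.
sample = (
    "Package    Version\n"
    "---------- -------\n"
    "numpy      1.23.0\n"
    "wrapt      1.12.1\n"
    "\n"
)

lines = iter(sample.split("\n"))
next(lines)  # skip the 'Package ... Version' header line
next(lines)  # skip the '------- ... -------' ruler line

pins = []
for line in lines:
    tokens = [token for token in line.split(" ") if token != ""]
    if len(tokens) == 0:
        continue  # trailing empty line
    pins.append(f"{tokens[0]}=={tokens[1]}")

# pins == ["numpy==1.23.0", "wrapt==1.12.1"]
```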
diff --git a/frontends/concrete-python/concrete/numpy/compilation/circuit.py b/frontends/concrete-python/concrete/numpy/compilation/circuit.py
new file mode 100644
index 000000000..436656ae9
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/circuit.py
@@ -0,0 +1,218 @@
+"""
+Declaration of `Circuit` class.
+"""
+
+from typing import Any, Optional, Tuple, Union, cast
+
+import numpy as np
+from concrete.compiler import PublicArguments, PublicResult
+
+from ..dtypes import Integer
+from ..internal.utils import assert_that
+from ..representation import Graph
+from .client import Client
+from .configuration import Configuration
+from .server import Server
+
+
+class Circuit:
+ """
+    Circuit class, to combine computation graph, MLIR, client and server into a single object.
+ """
+
+ configuration: Configuration
+
+ graph: Graph
+ mlir: str
+
+ client: Client
+ server: Server
+
+ def __init__(self, graph: Graph, mlir: str, configuration: Optional[Configuration] = None):
+ self.configuration = configuration if configuration is not None else Configuration()
+
+ self.graph = graph
+ self.mlir = mlir
+
+ self._initialize_client_and_server()
+
+ def _initialize_client_and_server(self):
+ input_signs = []
+ for i in range(len(self.graph.input_nodes)): # pylint: disable=consider-using-enumerate
+ input_value = self.graph.input_nodes[i].output
+ assert_that(isinstance(input_value.dtype, Integer))
+ input_dtype = cast(Integer, input_value.dtype)
+ input_signs.append(input_dtype.is_signed)
+
+ output_signs = []
+ for i in range(len(self.graph.output_nodes)): # pylint: disable=consider-using-enumerate
+ output_value = self.graph.output_nodes[i].output
+ assert_that(isinstance(output_value.dtype, Integer))
+ output_dtype = cast(Integer, output_value.dtype)
+ output_signs.append(output_dtype.is_signed)
+
+ self.server = Server.create(self.mlir, input_signs, output_signs, self.configuration)
+
+ keyset_cache_directory = None
+ if self.configuration.use_insecure_key_cache:
+ assert_that(self.configuration.enable_unsafe_features)
+ assert_that(self.configuration.insecure_key_cache_location is not None)
+ keyset_cache_directory = self.configuration.insecure_key_cache_location
+
+ self.client = Client(self.server.client_specs, keyset_cache_directory)
+
+ def __str__(self):
+ return self.graph.format()
+
+ def simulate(self, *args: Any) -> Any:
+ """
+ Simulate execution of the circuit.
+
+ Args:
+ *args (Any):
+ inputs to the circuit
+
+ Returns:
+ Any:
+ result of the simulation
+ """
+
+ return self.graph(*args, p_error=self.p_error)
+
+ def keygen(self, force: bool = False):
+ """
+ Generate keys required for homomorphic evaluation.
+
+ Args:
+ force (bool, default = False):
+ whether to generate new keys even if keys are already generated
+ """
+
+ self.client.keygen(force)
+
+ def encrypt(self, *args: Union[int, np.ndarray]) -> PublicArguments:
+ """
+ Prepare inputs to be run on the circuit.
+
+ Args:
+ *args (Union[int, numpy.ndarray]):
+ inputs to the circuit
+
+ Returns:
+ PublicArguments:
+ encrypted and plain arguments as well as public keys
+ """
+
+ return self.client.encrypt(*args)
+
+ def run(self, args: PublicArguments) -> PublicResult:
+ """
+ Evaluate circuit using encrypted arguments.
+
+ Args:
+ args (PublicArguments):
+ arguments to the circuit (can be obtained with `encrypt` method of `Circuit`)
+
+ Returns:
+ PublicResult:
+                encrypted result of homomorphic evaluation
+ """
+
+ self.keygen(force=False)
+ return self.server.run(args, self.client.evaluation_keys)
+
+ def decrypt(
+ self,
+ result: PublicResult,
+ ) -> Union[int, np.ndarray, Tuple[Union[int, np.ndarray], ...]]:
+ """
+        Decrypt the result of homomorphic evaluation.
+
+        Args:
+            result (PublicResult):
+                encrypted result of homomorphic evaluation
+
+        Returns:
+            Union[int, np.ndarray, Tuple[Union[int, np.ndarray], ...]]:
+                clear result of homomorphic evaluation
+ """
+
+ return self.client.decrypt(result)
+
+ def encrypt_run_decrypt(self, *args: Any) -> Any:
+ """
+ Encrypt inputs, run the circuit, and decrypt the outputs in one go.
+
+ Args:
+ *args (Union[int, numpy.ndarray]):
+ inputs to the circuit
+
+ Returns:
+ Union[int, np.ndarray, Tuple[Union[int, np.ndarray], ...]]:
+ clear result of homomorphic evaluation
+ """
+
+ return self.decrypt(self.run(self.encrypt(*args)))
+
+ def cleanup(self):
+ """
+ Cleanup the temporary library output directory.
+ """
+
+ self.server.cleanup()
+
+ @property
+ def complexity(self) -> float:
+ """
+ Get complexity of the circuit.
+ """
+ return self.server.complexity
+
+ @property
+ def size_of_secret_keys(self) -> int:
+ """
+ Get size of the secret keys of the circuit.
+ """
+ return self.server.size_of_secret_keys
+
+ @property
+ def size_of_bootstrap_keys(self) -> int:
+ """
+ Get size of the bootstrap keys of the circuit.
+ """
+ return self.server.size_of_bootstrap_keys
+
+ @property
+ def size_of_keyswitch_keys(self) -> int:
+ """
+ Get size of the key switch keys of the circuit.
+ """
+ return self.server.size_of_keyswitch_keys
+
+ @property
+ def size_of_inputs(self) -> int:
+ """
+ Get size of the inputs of the circuit.
+ """
+ return self.server.size_of_inputs
+
+ @property
+ def size_of_outputs(self) -> int:
+ """
+ Get size of the outputs of the circuit.
+ """
+ return self.server.size_of_outputs
+
+ @property
+    def p_error(self) -> float:
+ """
+ Get probability of error for each simple TLU (on a scalar).
+ """
+ return self.server.p_error
+
+ @property
+    def global_p_error(self) -> float:
+ """
+ Get the probability of having at least one simple TLU error during the entire execution.
+ """
+ return self.server.global_p_error
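`encrypt_run_decrypt` is simply the composition of the three methods above, with `run` triggering a lazy `keygen` first. A toy stand-in showing the composition (the XOR masking here is a placeholder, not actual FHE):

```python
# Toy stand-in for the pipeline that `Circuit.encrypt_run_decrypt`
# wires together; the XOR "encryption" is a placeholder, not real FHE.
KEY = 0b1010

class ToyCircuit:
    def encrypt(self, *args):
        return tuple(a ^ KEY for a in args)

    def run(self, encrypted_args):
        # placeholder evaluation: a real server computes on ciphertexts
        return sum(a ^ KEY for a in encrypted_args) ^ KEY

    def decrypt(self, result):
        return result ^ KEY

    def encrypt_run_decrypt(self, *args):
        # the same composition as in Circuit
        return self.decrypt(self.run(self.encrypt(*args)))

assert ToyCircuit().encrypt_run_decrypt(3, 4) == 7
```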
diff --git a/frontends/concrete-python/concrete/numpy/compilation/client.py b/frontends/concrete-python/concrete/numpy/compilation/client.py
new file mode 100644
index 000000000..0cfa664f2
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/client.py
@@ -0,0 +1,271 @@
+"""
+Declaration of `Client` class.
+"""
+
+import json
+import shutil
+import tempfile
+from pathlib import Path
+from typing import Dict, List, Optional, Tuple, Union
+
+import numpy as np
+from concrete.compiler import (
+ ClientSupport,
+ EvaluationKeys,
+ KeySet,
+ KeySetCache,
+ PublicArguments,
+ PublicResult,
+)
+
+from ..dtypes.integer import SignedInteger, UnsignedInteger
+from ..internal.utils import assert_that
+from ..values.value import Value
+from .specs import ClientSpecs
+
+
+class Client:
+ """
+ Client class, which can be used to manage keys, encrypt arguments and decrypt results.
+ """
+
+ specs: ClientSpecs
+
+ _keyset: Optional[KeySet]
+ _keyset_cache: Optional[KeySetCache]
+
+ def __init__(
+ self,
+ client_specs: ClientSpecs,
+ keyset_cache_directory: Optional[Union[str, Path]] = None,
+ ):
+ self.specs = client_specs
+
+ self._keyset = None
+ self._keyset_cache = None
+
+ if keyset_cache_directory is not None:
+ self._keyset_cache = KeySetCache.new(str(keyset_cache_directory))
+
+ def save(self, path: Union[str, Path]):
+ """
+ Save the client into the given path in zip format.
+
+ Args:
+ path (Union[str, Path]):
+ path to save the client
+ """
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ with open(Path(tmp_dir) / "client.specs.json", "w", encoding="utf-8") as f:
+ f.write(self.specs.serialize())
+
+ path = str(path)
+ if path.endswith(".zip"):
+ path = path[: len(path) - 4]
+
+ shutil.make_archive(path, "zip", tmp_dir)
+
+ @staticmethod
+ def load(
+ path: Union[str, Path],
+ keyset_cache_directory: Optional[Union[str, Path]] = None,
+ ) -> "Client":
+ """
+ Load the client from the given path in zip format.
+
+ Args:
+ path (Union[str, Path]):
+ path to load the client from
+
+ keyset_cache_directory (Optional[Union[str, Path]], default = None):
+ keyset cache directory to use
+
+ Returns:
+ Client:
+ client loaded from the filesystem
+ """
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ shutil.unpack_archive(path, tmp_dir, "zip")
+ with open(Path(tmp_dir) / "client.specs.json", "r", encoding="utf-8") as f:
+ client_specs = ClientSpecs.unserialize(f.read())
+
+ return Client(client_specs, keyset_cache_directory)
+
+ def keygen(self, force: bool = False):
+ """
+ Generate keys required for homomorphic evaluation.
+
+ Args:
+ force (bool, default = False):
+ whether to generate new keys even if keys are already generated
+ """
+
+ if self._keyset is None or force:
+ self._keyset = ClientSupport.key_set(self.specs.client_parameters, self._keyset_cache)
+
+ def encrypt(self, *args: Union[int, np.ndarray]) -> PublicArguments:
+ """
+ Prepare inputs to be run on the circuit.
+
+ Args:
+ *args (Union[int, numpy.ndarray]):
+ inputs to the circuit
+
+ Returns:
+ PublicArguments:
+ encrypted and plain arguments as well as public keys
+ """
+
+ client_parameters_json = json.loads(self.specs.client_parameters.serialize())
+ assert_that("inputs" in client_parameters_json)
+ input_specs = client_parameters_json["inputs"]
+
+ if len(args) != len(input_specs):
+ message = f"Expected {len(input_specs)} inputs but got {len(args)}"
+ raise ValueError(message)
+
+ sanitized_args: Dict[int, Union[int, np.ndarray]] = {}
+ for index, spec in enumerate(input_specs):
+ arg = args[index]
+ if isinstance(arg, list):
+ arg = np.array(arg)
+
+ is_valid = isinstance(arg, (int, np.integer)) or (
+ isinstance(arg, np.ndarray) and np.issubdtype(arg.dtype, np.integer)
+ )
+
+ width = spec["shape"]["width"]
+ shape = tuple(spec["shape"]["dimensions"])
+ is_encrypted = spec["encryption"] is not None
+
+ expected_dtype = (
+ SignedInteger(width) if self.specs.input_signs[index] else UnsignedInteger(width)
+ )
+ expected_value = Value(expected_dtype, shape, is_encrypted)
+ if is_valid:
+ expected_min = expected_dtype.min()
+ expected_max = expected_dtype.max()
+
+ actual_min = arg if isinstance(arg, int) else arg.min()
+ actual_max = arg if isinstance(arg, int) else arg.max()
+ actual_shape = () if isinstance(arg, int) else arg.shape
+
+ is_valid = (
+ actual_min >= expected_min
+ and actual_max <= expected_max
+ and actual_shape == expected_value.shape
+ )
+
+ if is_valid:
+ is_signed = self.specs.input_signs[index]
+ sanitizer = 0 if not is_signed else 2 ** (width - 1)
+
+ if isinstance(arg, int):
+ sanitized_args[index] = arg + sanitizer
+ else:
+ sanitized_args[index] = (arg + sanitizer).astype(np.uint64)
+
+ if not is_valid:
+ actual_value = Value.of(arg, is_encrypted=is_encrypted)
+ message = (
+ f"Expected argument {index} to be {expected_value} but it's {actual_value}"
+ )
+ raise ValueError(message)
+
+ self.keygen(force=False)
+ return ClientSupport.encrypt_arguments(
+ self.specs.client_parameters,
+ self._keyset,
+ [sanitized_args[i] for i in range(len(sanitized_args))],
+ )
+
+ def decrypt(
+ self,
+ result: PublicResult,
+ ) -> Union[int, np.ndarray, Tuple[Union[int, np.ndarray], ...]]:
+ """
+        Decrypt the result of homomorphic evaluation.
+
+        Args:
+            result (PublicResult):
+                encrypted result of homomorphic evaluation
+
+        Returns:
+            Union[int, np.ndarray, Tuple[Union[int, np.ndarray], ...]]:
+                clear result of homomorphic evaluation
+ """
+
+ self.keygen(force=False)
+ outputs = ClientSupport.decrypt_result(self.specs.client_parameters, self._keyset, result)
+ if not isinstance(outputs, tuple):
+ outputs = (outputs,)
+
+ sanitized_outputs: List[Union[int, np.ndarray]] = []
+
+ client_parameters_json = json.loads(self.specs.client_parameters.serialize())
+ assert_that("outputs" in client_parameters_json)
+ output_specs = client_parameters_json["outputs"]
+
+ for index, output in enumerate(outputs):
+ is_signed = self.specs.output_signs[index]
+ crt_decomposition = (
+ output_specs[index].get("encryption", {}).get("encoding", {}).get("crt", [])
+ )
+
+            if is_signed:
+                if crt_decomposition:
+                    if isinstance(output, int):
+                        sanitized_output = (
+                            output
+                            if output < (int(np.prod(crt_decomposition)) // 2)
+                            else -int(np.prod(crt_decomposition)) + output
+                        )
+                    else:
+                        output = output.astype(np.longlong)  # to prevent overflows in numpy
+                        sanitized_output = np.where(
+                            output < (np.prod(crt_decomposition) // 2),
+                            output,
+                            -np.prod(crt_decomposition) + output,
+                        ).astype(np.int64)  # type: ignore
+
+                    sanitized_outputs.append(sanitized_output)
+
+                else:
+                    n = output_specs[index]["shape"]["width"]
+                    output %= 2**n
+                    if isinstance(output, int):
+                        sanitized_output = output if output < (2 ** (n - 1)) else output - (2**n)
+                    else:
+                        output = output.astype(np.longlong)  # to prevent overflows in numpy
+                        sanitized_output = np.where(
+                            output < (2 ** (n - 1)), output, output - (2**n)
+                        ).astype(np.int64)  # type: ignore
+                    sanitized_outputs.append(sanitized_output)
+ else:
+ sanitized_outputs.append(
+ output if isinstance(output, int) else output.astype(np.uint64)
+ )
+
+ return sanitized_outputs[0] if len(sanitized_outputs) == 1 else tuple(sanitized_outputs)
+
+ @property
+ def evaluation_keys(self) -> EvaluationKeys:
+ """
+ Get evaluation keys for encrypted computation.
+
+ Returns:
+            EvaluationKeys:
+ evaluation keys for encrypted computation
+ """
+
+ self.keygen(force=False)
+
+ assert self._keyset is not None
+ return self._keyset.get_evaluation_keys()
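The sign handling in `encrypt` and `decrypt` can be sketched in isolation: signed inputs are shifted by `2 ** (width - 1)` into the unsigned domain before encryption, and signed raw outputs (in the non-CRT case) are reinterpreted as two's-complement values of the declared width. A minimal sketch, assuming a 4-bit width:

```python
import numpy as np

# Sketch of the sign handling in Client.encrypt/decrypt, assuming width 4.
width = 4
sanitizer = 2 ** (width - 1)  # 8

# encrypt side: shift signed inputs into the unsigned domain
args = np.array([-3, 0, 5])
sanitized = (args + sanitizer).astype(np.uint64)  # [5, 8, 13]

# decrypt side (signed, non-CRT case): reinterpret raw outputs as
# two's-complement values of the declared width
raw = np.array([13, 5], dtype=np.longlong) % (2 ** width)
recovered = np.where(raw < 2 ** (width - 1), raw, raw - 2 ** width).astype(np.int64)
# recovered == [-3, 5]
```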
diff --git a/frontends/concrete-python/concrete/numpy/compilation/compiler.py b/frontends/concrete-python/concrete/numpy/compilation/compiler.py
new file mode 100644
index 000000000..0ba0ce80e
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/compiler.py
@@ -0,0 +1,550 @@
+"""
+Declaration of `Compiler` class.
+"""
+
+import inspect
+import os
+import traceback
+from copy import deepcopy
+from enum import Enum, unique
+from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
+
+import numpy as np
+
+from ..extensions import AutoRounder
+from ..mlir import GraphConverter
+from ..representation import Graph
+from ..tracing import Tracer
+from ..values import Value
+from .artifacts import DebugArtifacts
+from .circuit import Circuit
+from .configuration import Configuration
+from .utils import fuse
+
+
+@unique
+class EncryptionStatus(str, Enum):
+ """
+ EncryptionStatus enum, to represent encryption status of parameters.
+ """
+
+ CLEAR = "clear"
+ ENCRYPTED = "encrypted"
+
+
+class Compiler:
+ """
+ Compiler class, to glue the compilation pipeline.
+ """
+
+ function: Callable
+ parameter_encryption_statuses: Dict[str, EncryptionStatus]
+
+ configuration: Configuration
+ artifacts: Optional[DebugArtifacts]
+
+ inputset: List[Any]
+ graph: Optional[Graph]
+
+ _is_direct: bool
+ _parameter_values: Dict[str, Value]
+
+ @staticmethod
+ def assemble(
+ function: Callable,
+ parameter_values: Dict[str, Value],
+ configuration: Optional[Configuration] = None,
+ artifacts: Optional[DebugArtifacts] = None,
+ **kwargs,
+ ) -> Circuit:
+ """
+ Assemble a circuit from the raw parameter values, used in direct circuit definition.
+
+ Args:
+ function (Callable):
+ function to convert to a circuit
+
+ parameter_values (Dict[str, Value]):
+ parameter values of the function
+
+ configuration(Optional[Configuration], default = None):
+ configuration to use
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ artifacts to store information about the process
+
+ kwargs (Dict[str, Any]):
+ configuration options to overwrite
+
+ Returns:
+ Circuit:
+ assembled circuit
+ """
+
+ compiler = Compiler(
+ function,
+ {
+ name: "encrypted" if value.is_encrypted else "clear"
+ for name, value in parameter_values.items()
+ },
+ )
+
+ # pylint: disable=protected-access
+ compiler._is_direct = True
+ compiler._parameter_values = parameter_values
+ # pylint: enable=protected-access
+
+ return compiler.compile(None, configuration, artifacts, **kwargs)
+
+ def __init__(
+ self,
+ function: Callable,
+ parameter_encryption_statuses: Dict[str, Union[str, EncryptionStatus]],
+ ):
+ signature = inspect.signature(function)
+
+ missing_args = list(signature.parameters)
+ for arg in parameter_encryption_statuses.keys():
+ if arg in signature.parameters:
+ missing_args.remove(arg)
+
+ if len(missing_args) != 0:
+ parameter_str = repr(missing_args[0])
+ for arg in missing_args[1:-1]:
+ parameter_str += f", {repr(arg)}"
+ if len(missing_args) != 1:
+ parameter_str += f" and {repr(missing_args[-1])}"
+
+ message = (
+ f"Encryption status{'es' if len(missing_args) > 1 else ''} "
+ f"of parameter{'s' if len(missing_args) > 1 else ''} "
+ f"{parameter_str} of function '{function.__name__}' "
+ f"{'are' if len(missing_args) > 1 else 'is'} not provided"
+ )
+ raise ValueError(message)
+
+ additional_args = list(parameter_encryption_statuses)
+ for arg in signature.parameters.keys():
+ if arg in parameter_encryption_statuses:
+ additional_args.remove(arg)
+
+ if len(additional_args) != 0:
+ parameter_str = repr(additional_args[0])
+ for arg in additional_args[1:-1]:
+ parameter_str += f", {repr(arg)}"
+ if len(additional_args) != 1:
+ parameter_str += f" and {repr(additional_args[-1])}"
+
+ message = (
+ f"Encryption status{'es' if len(additional_args) > 1 else ''} "
+ f"of {parameter_str} {'are' if len(additional_args) > 1 else 'is'} provided but "
+ f"{'they are' if len(additional_args) > 1 else 'it is'} not a parameter "
+ f"of function '{function.__name__}'"
+ )
+ raise ValueError(message)
+
+ self.function = function # type: ignore
+ self.parameter_encryption_statuses = {
+ param: EncryptionStatus(status.lower())
+ for param, status in parameter_encryption_statuses.items()
+ }
+
+ self.configuration = Configuration()
+ self.artifacts = None
+
+ self.inputset = []
+ self.graph = None
+
+ self._is_direct = False
+ self._parameter_values = {}
+
+ def __call__(
+ self,
+ *args: Any,
+ **kwargs: Any,
+ ) -> Union[
+ np.bool_,
+ np.integer,
+ np.floating,
+ np.ndarray,
+ Tuple[Union[np.bool_, np.integer, np.floating, np.ndarray], ...],
+ ]:
+ if len(kwargs) != 0:
+ message = f"Calling function '{self.function.__name__}' with kwargs is not supported"
+ raise RuntimeError(message)
+
+ sample = args[0] if len(args) == 1 else args
+
+ if self.graph is None:
+ self._trace(sample)
+ assert self.graph is not None
+
+ self.inputset.append(sample)
+ return self.graph(*args)
+
+ def _trace(self, sample: Union[Any, Tuple[Any, ...]]):
+ """
+        Trace the function with a sample input and fuse the resulting graph.
+
+ Args:
+ sample (Union[Any, Tuple[Any, ...]]):
+ sample to use for tracing
+ """
+
+ if self.artifacts is not None:
+ self.artifacts.add_source_code(self.function)
+ for param, encryption_status in self.parameter_encryption_statuses.items():
+ self.artifacts.add_parameter_encryption_status(param, encryption_status)
+
+ parameters = {
+ param: Value.of(arg, is_encrypted=(status == EncryptionStatus.ENCRYPTED))
+ for arg, (param, status) in zip(
+ sample if len(self.parameter_encryption_statuses) > 1 else (sample,),
+ self.parameter_encryption_statuses.items(),
+ )
+ }
+
+ self.graph = Tracer.trace(self.function, parameters)
+ if self.artifacts is not None:
+ self.artifacts.add_graph("initial", self.graph)
+
+ fuse(self.graph, self.artifacts)
+
+ def _evaluate(
+ self,
+ action: str,
+ inputset: Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]],
+ ):
+ """
+ Trace, fuse, measure bounds, and update values in the resulting graph in one go.
+
+ Args:
+ action (str):
+ action being performed (e.g., "trace", "compile")
+
+ inputset (Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]]):
+ optional inputset to extend accumulated inputset before bounds measurement
+ """
+
+ if self._is_direct:
+
+ self.graph = Tracer.trace(self.function, self._parameter_values, is_direct=True)
+ if self.artifacts is not None:
+ self.artifacts.add_graph("initial", self.graph) # pragma: no cover
+
+ fuse(self.graph, self.artifacts)
+ if self.artifacts is not None:
+ self.artifacts.add_graph("final", self.graph) # pragma: no cover
+
+ return
+
+ if inputset is not None:
+ previous_inputset_length = len(self.inputset)
+ for index, sample in enumerate(iter(inputset)):
+ self.inputset.append(sample)
+
+ if not isinstance(sample, tuple):
+ sample = (sample,)
+
+ if len(sample) != len(self.parameter_encryption_statuses):
+ self.inputset = self.inputset[:previous_inputset_length]
+
+ expected = (
+ "a single value"
+ if len(self.parameter_encryption_statuses) == 1
+ else f"a tuple of {len(self.parameter_encryption_statuses)} values"
+ )
+ actual = (
+ "a single value" if len(sample) == 1 else f"a tuple of {len(sample)} values"
+ )
+
+ message = (
+ f"Input #{index} of your inputset is not well formed "
+ f"(expected {expected} got {actual})"
+ )
+ raise ValueError(message)
+
+ if self.configuration.auto_adjust_rounders:
+ AutoRounder.adjust(self.function, self.inputset)
+
+ if self.graph is None:
+ try:
+ first_sample = next(iter(self.inputset))
+ except StopIteration as error:
+ message = (
+ f"{action} function '{self.function.__name__}' "
+ f"without an inputset is not supported"
+ )
+ raise RuntimeError(message) from error
+
+ self._trace(first_sample)
+ assert self.graph is not None
+
+ bounds = self.graph.measure_bounds(self.inputset)
+ self.graph.update_with_bounds(bounds)
+
+ if self.artifacts is not None:
+ self.artifacts.add_graph("final", self.graph)
+
+ def trace(
+ self,
+ inputset: Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]] = None,
+ configuration: Optional[Configuration] = None,
+ artifacts: Optional[DebugArtifacts] = None,
+ **kwargs,
+ ) -> Graph:
+ """
+ Trace the function using an inputset.
+
+ Args:
+ inputset (Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]]):
+ optional inputset to extend accumulated inputset before bounds measurement
+
+ configuration(Optional[Configuration], default = None):
+ configuration to use
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ artifacts to store information about the process
+
+ kwargs (Dict[str, Any]):
+ configuration options to overwrite
+
+ Returns:
+ Graph:
+ computation graph representing the function prior to MLIR conversion
+ """
+
+ old_configuration = deepcopy(self.configuration)
+ old_artifacts = deepcopy(self.artifacts)
+
+ if configuration is not None:
+ self.configuration = configuration
+
+ if len(kwargs) != 0:
+ self.configuration = self.configuration.fork(**kwargs)
+
+ self.artifacts = (
+ artifacts
+ if artifacts is not None
+ else DebugArtifacts()
+ if self.configuration.dump_artifacts_on_unexpected_failures
+ else None
+ )
+
+ try:
+
+ self._evaluate("Tracing", inputset)
+ assert self.graph is not None
+
+ if self.configuration.verbose or self.configuration.show_graph:
+ graph = self.graph.format()
+ longest_line = max([len(line) for line in graph.split("\n")])
+
+ try: # pragma: no cover
+
+ # this branch cannot be covered
+ # because `os.get_terminal_size()`
+ # raises an exception during tests
+
+ columns, _ = os.get_terminal_size()
+ if columns == 0:
+ columns = min(longest_line, 80)
+ else:
+ columns = min(longest_line, columns)
+ except OSError: # pragma: no cover
+ columns = min(longest_line, 80)
+
+ print()
+
+ print("Computation Graph")
+ print("-" * columns)
+ print(graph)
+ print("-" * columns)
+
+ print()
+
+ return self.graph
+
+ except Exception: # pragma: no cover
+
+ # this branch is reserved for unexpected issues and hence it shouldn't be tested
+ # if it could be tested, we would have fixed the underlying issue
+
+ # if the user desires so,
+ # we need to export all the information we have about the compilation
+
+ if self.configuration.dump_artifacts_on_unexpected_failures:
+ assert self.artifacts is not None
+ self.artifacts.export()
+
+ traceback_path = self.artifacts.output_directory.joinpath("traceback.txt")
+ with open(traceback_path, "w", encoding="utf-8") as f:
+ f.write(traceback.format_exc())
+
+ raise
+
+ finally:
+
+ self.configuration = old_configuration
+ self.artifacts = old_artifacts
+
+ # pylint: disable=too-many-branches,too-many-statements
+
+ def compile(
+ self,
+ inputset: Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]] = None,
+ configuration: Optional[Configuration] = None,
+ artifacts: Optional[DebugArtifacts] = None,
+ **kwargs,
+ ) -> Circuit:
+ """
+ Compile the function using an inputset.
+
+ Args:
+ inputset (Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]]):
+ optional inputset to extend accumulated inputset before bounds measurement
+
+ configuration(Optional[Configuration], default = None):
+ configuration to use
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ artifacts to store information about the process
+
+ kwargs (Dict[str, Any]):
+ configuration options to overwrite
+
+ Returns:
+ Circuit:
+ compiled circuit
+ """
+
+ old_configuration = deepcopy(self.configuration)
+ old_artifacts = deepcopy(self.artifacts)
+
+ if configuration is not None:
+ self.configuration = configuration
+
+ if len(kwargs) != 0:
+ self.configuration = self.configuration.fork(**kwargs)
+
+ self.artifacts = (
+ artifacts
+ if artifacts is not None
+ else DebugArtifacts()
+ if self.configuration.dump_artifacts_on_unexpected_failures
+ else None
+ )
+
+ try:
+ self._evaluate("Compiling", inputset)
+ assert self.graph is not None
+
+ mlir = GraphConverter.convert(self.graph)
+ if self.artifacts is not None:
+ self.artifacts.add_mlir_to_compile(mlir)
+
+ show_graph = (
+ self.configuration.show_graph
+ if self.configuration.show_graph is not None
+ else self.configuration.verbose
+ )
+ show_mlir = (
+ self.configuration.show_mlir
+ if self.configuration.show_mlir is not None
+ else self.configuration.verbose
+ )
+ show_optimizer = (
+ self.configuration.show_optimizer
+ if self.configuration.show_optimizer is not None
+ else self.configuration.verbose
+ )
+
+ columns = 0
+ if show_graph or show_mlir or show_optimizer:
+
+ graph = (
+ self.graph.format()
+ if self.configuration.verbose or self.configuration.show_graph
+ else ""
+ )
+
+ longest_graph_line = max([len(line) for line in graph.split("\n")])
+ longest_mlir_line = max([len(line) for line in mlir.split("\n")])
+ longest_line = max(longest_graph_line, longest_mlir_line)
+
+ try: # pragma: no cover
+
+ # this branch cannot be covered
+ # because `os.get_terminal_size()`
+ # raises an exception during tests
+
+ columns, _ = os.get_terminal_size()
+ if columns == 0:
+ columns = min(longest_line, 80)
+ else:
+ columns = min(longest_line, columns)
+ except OSError: # pragma: no cover
+ columns = min(longest_line, 80)
+
+ if show_graph:
+ print()
+
+ print("Computation Graph")
+ print("-" * columns)
+ print(graph)
+ print("-" * columns)
+
+ print()
+
+ if show_mlir:
+ print("\n" if not show_graph else "", end="")
+
+ print("MLIR")
+ print("-" * columns)
+ print(mlir)
+ print("-" * columns)
+
+ print()
+
+ if show_optimizer:
+ print("\n" if not (show_graph or show_mlir) else "", end="")
+
+ print("Optimizer")
+ print("-" * columns)
+
+ circuit = Circuit(self.graph, mlir, self.configuration)
+
+ client_parameters = circuit.client.specs.client_parameters
+ if self.artifacts is not None:
+ self.artifacts.add_client_parameters(client_parameters.serialize())
+
+ if show_optimizer:
+ print("-" * columns)
+ print()
+
+ return circuit
+
+ except Exception: # pragma: no cover
+
+ # this branch is reserved for unexpected issues and hence it shouldn't be tested
+ # if it could be tested, we would have fixed the underlying issue
+
+ # if the user desires so,
+ # we need to export all the information we have about the compilation
+
+ if self.configuration.dump_artifacts_on_unexpected_failures:
+ assert self.artifacts is not None
+ self.artifacts.export()
+
+ traceback_path = self.artifacts.output_directory.joinpath("traceback.txt")
+ with open(traceback_path, "w", encoding="utf-8") as f:
+ f.write(traceback.format_exc())
+
+ raise
+
+ finally:
+
+ self.configuration = old_configuration
+ self.artifacts = old_artifacts
+
+ # pylint: enable=too-many-branches,too-many-statements
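The width-clamping logic in `compile` above (terminal width when available, an 80-column cap otherwise) can be isolated as a small helper. This is an illustrative sketch, not part of the patch; the helper name is made up:

```python
import os


def display_width(longest_line: int, fallback: int = 80) -> int:
    """Clamp separator width to the terminal, with a cap when no terminal is attached."""
    try:
        columns, _ = os.get_terminal_size()
    except OSError:
        # no controlling terminal (tests, pipes): behave as if the width were unknown
        columns = 0
    if columns == 0:
        return min(longest_line, fallback)
    return min(longest_line, columns)
```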
diff --git a/frontends/concrete-python/concrete/numpy/compilation/configuration.py b/frontends/concrete-python/concrete/numpy/compilation/configuration.py
new file mode 100644
index 000000000..ab786d794
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/configuration.py
@@ -0,0 +1,162 @@
+"""
+Declaration of `Configuration` class.
+"""
+
+from copy import deepcopy
+from pathlib import Path
+from typing import Optional, Union, get_type_hints
+
+DEFAULT_P_ERROR = None
+DEFAULT_GLOBAL_P_ERROR = 1 / 100_000
+
+
+class Configuration:
+ """
+ Configuration class, to allow the compilation process to be customized.
+ """
+
+ # pylint: disable=too-many-instance-attributes
+
+ verbose: bool
+ show_graph: Optional[bool]
+ show_mlir: Optional[bool]
+ show_optimizer: Optional[bool]
+ dump_artifacts_on_unexpected_failures: bool
+ enable_unsafe_features: bool
+ use_insecure_key_cache: bool
+ loop_parallelize: bool
+ dataflow_parallelize: bool
+ auto_parallelize: bool
+ jit: bool
+ p_error: Optional[float]
+ global_p_error: Optional[float]
+ insecure_key_cache_location: Optional[str]
+ auto_adjust_rounders: bool
+
+ # pylint: enable=too-many-instance-attributes
+
+ def _validate(self):
+ """
+ Validate configuration.
+ """
+
+ if not self.enable_unsafe_features:
+
+ if self.use_insecure_key_cache:
+ message = "Insecure key cache cannot be used without enabling unsafe features"
+ raise RuntimeError(message)
+
+ if self.use_insecure_key_cache and self.insecure_key_cache_location is None:
+ message = "Insecure key cache cannot be enabled without specifying its location"
+ raise RuntimeError(message)
+
+ # pylint: disable=too-many-arguments
+
+ def __init__(
+ self,
+ verbose: bool = False,
+ show_graph: Optional[bool] = None,
+ show_mlir: Optional[bool] = None,
+ show_optimizer: Optional[bool] = None,
+ dump_artifacts_on_unexpected_failures: bool = True,
+ enable_unsafe_features: bool = False,
+ use_insecure_key_cache: bool = False,
+ insecure_key_cache_location: Optional[Union[Path, str]] = None,
+ loop_parallelize: bool = True,
+ dataflow_parallelize: bool = True,
+ auto_parallelize: bool = False,
+ jit: bool = False,
+ p_error: Optional[float] = None,
+ global_p_error: Optional[float] = None,
+ auto_adjust_rounders: bool = False,
+ ):
+ self.verbose = verbose
+ self.show_graph = show_graph
+ self.show_mlir = show_mlir
+ self.show_optimizer = show_optimizer
+ self.dump_artifacts_on_unexpected_failures = dump_artifacts_on_unexpected_failures
+ self.enable_unsafe_features = enable_unsafe_features
+ self.use_insecure_key_cache = use_insecure_key_cache
+ self.insecure_key_cache_location = (
+ str(insecure_key_cache_location) if insecure_key_cache_location is not None else None
+ )
+ self.loop_parallelize = loop_parallelize
+ self.dataflow_parallelize = dataflow_parallelize
+ self.auto_parallelize = auto_parallelize
+ self.jit = jit
+ self.p_error = p_error
+ self.global_p_error = global_p_error
+ self.auto_adjust_rounders = auto_adjust_rounders
+
+ self._validate()
+
+ # pylint: enable=too-many-arguments
+
+ def fork(self, **kwargs) -> "Configuration":
+ """
+ Get a new configuration from another one with the specified changes.
+
+ Args:
+ **kwargs:
+ changes to make
+
+ Returns:
+ Configuration:
+ configuration that is forked from self and updated using kwargs
+ """
+
+ # pylint: disable=too-many-branches
+
+ result = deepcopy(self)
+
+ hints = get_type_hints(Configuration)
+ for name, value in kwargs.items():
+ if name not in hints:
+ message = f"Unexpected keyword argument '{name}'"
+ raise TypeError(message)
+
+ hint = hints[name]
+ expected = None
+ is_correctly_typed = True
+
+ if name == "insecure_key_cache_location":
+ if not (value is None or isinstance(value, str)):
+ is_correctly_typed = False
+ expected = "Optional[str]"
+
+ elif name == "p_error":
+ if not (value is None or isinstance(value, float)):
+ is_correctly_typed = False
+ expected = "Optional[float]"
+
+ elif name == "global_p_error":
+ if not (value is None or isinstance(value, float)):
+ is_correctly_typed = False
+ expected = "Optional[float]"
+
+ elif name in ["show_graph", "show_mlir", "show_optimizer"]:
+ if not (value is None or isinstance(value, bool)):
+ is_correctly_typed = False
+ expected = "Optional[bool]"
+
+ elif not isinstance(value, hint): # type: ignore
+ is_correctly_typed = False
+
+ if not is_correctly_typed:
+ if expected is None:
+ expected = hint.__name__ if hasattr(hint, "__name__") else str(hint)
+ message = (
+ f"Unexpected type for keyword argument '{name}' "
+ f"(expected '{expected}', got '{type(value).__name__}')"
+ )
+ raise TypeError(message)
+
+ setattr(result, name, value)
+
+ # pylint: disable=protected-access
+ result._validate()
+ # pylint: enable=protected-access
+
+ return result
+
+ # pylint: enable=too-many-branches
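The runtime keyword validation in `fork` relies on the class's own annotations via `typing.get_type_hints`. A minimal sketch of that pattern, using a toy `Options` class standing in for `Configuration` (it keeps only the copy, lookup, and unknown-keyword check, not the per-field type checks):

```python
from copy import deepcopy
from typing import Optional, get_type_hints


class Options:
    """Toy stand-in for Configuration: typed attributes plus a fork method."""

    verbose: bool
    p_error: Optional[float]

    def __init__(self, verbose: bool = False, p_error: Optional[float] = None):
        self.verbose = verbose
        self.p_error = p_error

    def fork(self, **kwargs) -> "Options":
        # copy first, then apply overrides validated against the class annotations
        result = deepcopy(self)
        hints = get_type_hints(type(self))
        for name, value in kwargs.items():
            if name not in hints:
                raise TypeError(f"Unexpected keyword argument '{name}'")
            setattr(result, name, value)
        return result


base = Options()
forked = base.fork(verbose=True)
```

Forking leaves the original untouched, which is what lets `compile` temporarily swap configurations and restore them in its `finally` block.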
diff --git a/frontends/concrete-python/concrete/numpy/compilation/decorators.py b/frontends/concrete-python/concrete/numpy/compilation/decorators.py
new file mode 100644
index 000000000..50b4e8c9d
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/decorators.py
@@ -0,0 +1,163 @@
+"""
+Declaration of `circuit` and `compiler` decorators.
+"""
+
+import inspect
+from typing import Any, Callable, Dict, Iterable, Mapping, Optional, Tuple, Union
+
+from ..representation import Graph
+from ..tracing.typing import ScalarAnnotation
+from ..values import Value
+from .artifacts import DebugArtifacts
+from .circuit import Circuit
+from .compiler import Compiler, EncryptionStatus
+from .configuration import Configuration
+
+
+def circuit(
+ parameters: Mapping[str, Union[str, EncryptionStatus]],
+ configuration: Optional[Configuration] = None,
+ artifacts: Optional[DebugArtifacts] = None,
+ **kwargs,
+):
+ """
+ Provide a direct interface for compilation.
+
+ Args:
+ parameters (Mapping[str, Union[str, EncryptionStatus]]):
+ encryption statuses of the parameters of the function to compile
+
+ configuration (Optional[Configuration], default = None):
+ configuration to use
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ artifacts to store information about the process
+
+ kwargs (Dict[str, Any]):
+ configuration options to overwrite
+ """
+
+ def decoration(function: Callable):
+ signature = inspect.signature(function)
+
+ parameter_values: Dict[str, Value] = {}
+ for name, details in signature.parameters.items():
+ if name not in parameters:
+ continue
+
+ annotation = details.annotation
+
+ is_value = isinstance(annotation, Value)
+ is_scalar_annotation = isinstance(annotation, type) and issubclass(
+ annotation, ScalarAnnotation
+ )
+
+ if not (is_value or is_scalar_annotation):
+ message = (
+ f"Annotation {annotation} for argument '{name}' is not valid "
+ f"(please use a cnp type such as "
+ f"`cnp.uint4` or `cnp.tensor[cnp.uint4, 3, 2]`)"
+ )
+ raise ValueError(message)
+
+ parameter_values[name] = (
+ annotation if is_value else Value(annotation.dtype, shape=(), is_encrypted=False)
+ )
+
+ status = EncryptionStatus(parameters[name].lower())
+ parameter_values[name].is_encrypted = status == "encrypted"
+
+ return Compiler.assemble(function, parameter_values, configuration, artifacts, **kwargs)
+
+ return decoration
+
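The `circuit` decorator above reads each parameter's annotation through `inspect.signature` before validating it. The introspection step on its own, as a standalone sketch (the helper name is illustrative):

```python
import inspect


def parameter_annotations(function):
    # map parameter names to their annotations, as `decoration` does above
    signature = inspect.signature(function)
    return {name: details.annotation for name, details in signature.parameters.items()}


def example(x: int, y: str = "hi"):
    return x
```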
+
+def compiler(parameters: Mapping[str, Union[str, EncryptionStatus]]):
+ """
+ Provide an easy interface for compilation.
+
+ Args:
+ parameters (Mapping[str, Union[str, EncryptionStatus]]):
+ encryption statuses of the parameters of the function to compile
+ """
+
+ def decoration(function: Callable):
+ class Compilable:
+ """
+ Compilable class, to wrap a function and provide methods to trace and compile it.
+ """
+
+ function: Callable
+ compiler: Compiler
+
+ def __init__(self, function: Callable):
+ self.function = function # type: ignore
+ self.compiler = Compiler(self.function, dict(parameters))
+
+ def __call__(self, *args, **kwargs) -> Any:
+ self.compiler(*args, **kwargs)
+ return self.function(*args, **kwargs)
+
+ def trace(
+ self,
+ inputset: Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]] = None,
+ configuration: Optional[Configuration] = None,
+ artifacts: Optional[DebugArtifacts] = None,
+ **kwargs,
+ ) -> Graph:
+ """
+ Trace the function into a computation graph.
+
+ Args:
+ inputset (Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]]):
+ optional inputset to extend accumulated inputset before bounds measurement
+
+ configuration (Optional[Configuration], default = None):
+ configuration to use
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ artifacts to store information about the process
+
+ kwargs (Dict[str, Any]):
+ configuration options to overwrite
+
+ Returns:
+ Graph:
+ computation graph representing the function prior to MLIR conversion
+ """
+
+ return self.compiler.trace(inputset, configuration, artifacts, **kwargs)
+
+ def compile(
+ self,
+ inputset: Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]] = None,
+ configuration: Optional[Configuration] = None,
+ artifacts: Optional[DebugArtifacts] = None,
+ **kwargs,
+ ) -> Circuit:
+ """
+ Compile the function into a circuit.
+
+ Args:
+ inputset (Optional[Union[Iterable[Any], Iterable[Tuple[Any, ...]]]]):
+ optional inputset to extend accumulated inputset before bounds measurement
+
+ configuration (Optional[Configuration], default = None):
+ configuration to use
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ artifacts to store information about the process
+
+ kwargs (Dict[str, Any]):
+ configuration options to overwrite
+
+ Returns:
+ Circuit:
+ compiled circuit
+ """
+
+ return self.compiler.compile(inputset, configuration, artifacts, **kwargs)
+
+ return Compilable(function)
+
+ return decoration
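The `compiler` decorator wraps a function in a class that still behaves like the plain function when called, while also recording samples and exposing extra methods. The shape of that pattern, reduced to a standalone toy (the names `traced`/`Traceable` are illustrative, not part of the library):

```python
from typing import Any, Callable


def traced(function: Callable):
    class Traceable:
        """Wrap a function, recording every call while still forwarding to it."""

        def __init__(self, function: Callable):
            self.function = function
            self.calls = []

        def __call__(self, *args: Any) -> Any:
            self.calls.append(args)  # record the sample, like Compiler.__call__
            return self.function(*args)  # still behave like the plain function

    return Traceable(function)


@traced
def add(x, y):
    return x + y
```

Returning an instance (rather than a function) is what lets the decorated object carry state and methods such as `trace` and `compile`.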
diff --git a/frontends/concrete-python/concrete/numpy/compilation/server.py b/frontends/concrete-python/concrete/numpy/compilation/server.py
new file mode 100644
index 000000000..f01ad4dac
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/server.py
@@ -0,0 +1,350 @@
+"""
+Declaration of `Server` class.
+"""
+
+import json
+import shutil
+import tempfile
+from pathlib import Path
+from typing import List, Optional, Union
+
+import concrete.compiler
+from concrete.compiler import (
+ CompilationFeedback,
+ CompilationOptions,
+ EvaluationKeys,
+ JITCompilationResult,
+ JITLambda,
+ JITSupport,
+ LibraryCompilationResult,
+ LibraryLambda,
+ LibrarySupport,
+ PublicArguments,
+ PublicResult,
+)
+
+from ..internal.utils import assert_that
+from .configuration import DEFAULT_GLOBAL_P_ERROR, DEFAULT_P_ERROR, Configuration
+from .specs import ClientSpecs
+
+
+class Server:
+ """
+ Server class, which can be used to perform homomorphic computation.
+ """
+
+ client_specs: ClientSpecs
+
+ _output_dir: Optional[tempfile.TemporaryDirectory]
+ _support: Union[JITSupport, LibrarySupport]
+ _compilation_result: Union[JITCompilationResult, LibraryCompilationResult]
+ _compilation_feedback: CompilationFeedback
+ _server_lambda: Union[JITLambda, LibraryLambda]
+
+ _mlir: Optional[str]
+ _configuration: Optional[Configuration]
+
+ def __init__(
+ self,
+ client_specs: ClientSpecs,
+ output_dir: Optional[tempfile.TemporaryDirectory],
+ support: Union[JITSupport, LibrarySupport],
+ compilation_result: Union[JITCompilationResult, LibraryCompilationResult],
+ server_lambda: Union[JITLambda, LibraryLambda],
+ ):
+ self.client_specs = client_specs
+
+ self._output_dir = output_dir
+ self._support = support
+ self._compilation_result = compilation_result
+ self._compilation_feedback = self._support.load_compilation_feedback(compilation_result)
+ self._server_lambda = server_lambda
+ self._mlir = None
+ self._configuration = None
+
+ assert_that(
+ support.load_client_parameters(compilation_result).serialize()
+ == client_specs.client_parameters.serialize()
+ )
+
+ @staticmethod
+ def create(
+ mlir: str,
+ input_signs: List[bool],
+ output_signs: List[bool],
+ configuration: Configuration,
+ ) -> "Server":
+ """
+ Create a server using MLIR and output sign information.
+
+ Args:
+ mlir (str):
+ mlir to compile
+
+ input_signs (List[bool]):
+ sign status of the inputs
+
+ output_signs (List[bool]):
+ sign status of the outputs
+
+ configuration (Configuration):
+ configuration to use
+ """
+
+ options = CompilationOptions.new("main")
+
+ options.set_loop_parallelize(configuration.loop_parallelize)
+ options.set_dataflow_parallelize(configuration.dataflow_parallelize)
+ options.set_auto_parallelize(configuration.auto_parallelize)
+
+ if configuration.auto_parallelize or configuration.dataflow_parallelize:
+ concrete.compiler.init_dfr()
+
+ global_p_error_is_set = configuration.global_p_error is not None
+ p_error_is_set = configuration.p_error is not None
+
+ if global_p_error_is_set and p_error_is_set: # pragma: no cover
+ options.set_global_p_error(configuration.global_p_error)
+ options.set_p_error(configuration.p_error)
+
+ elif global_p_error_is_set: # pragma: no cover
+ options.set_global_p_error(configuration.global_p_error)
+ options.set_p_error(1.0)
+
+ elif p_error_is_set: # pragma: no cover
+ options.set_global_p_error(1.0)
+ options.set_p_error(configuration.p_error)
+
+ else: # pragma: no cover
+ if DEFAULT_GLOBAL_P_ERROR is not None:
+ options.set_global_p_error(DEFAULT_GLOBAL_P_ERROR)
+ else:
+ options.set_global_p_error(1.0)
+
+ if DEFAULT_P_ERROR is not None:
+ options.set_p_error(DEFAULT_P_ERROR)
+ else:
+ options.set_p_error(1.0)
+
+ show_optimizer = (
+ configuration.show_optimizer
+ if configuration.show_optimizer is not None
+ else configuration.verbose
+ )
+ options.set_display_optimizer_choice(show_optimizer)
+
+ if configuration.jit:
+
+ output_dir = None
+
+ support = JITSupport.new()
+ compilation_result = support.compile(mlir, options)
+ server_lambda = support.load_server_lambda(compilation_result)
+
+ else:
+
+ # pylint: disable=consider-using-with
+ output_dir = tempfile.TemporaryDirectory()
+ output_dir_path = Path(output_dir.name)
+ # pylint: enable=consider-using-with
+
+ support = LibrarySupport.new(
+ str(output_dir_path), generateCppHeader=False, generateStaticLib=False
+ )
+ compilation_result = support.compile(mlir, options)
+ server_lambda = support.load_server_lambda(compilation_result)
+
+ client_parameters = support.load_client_parameters(compilation_result)
+ client_specs = ClientSpecs(input_signs, client_parameters, output_signs)
+
+ result = Server(client_specs, output_dir, support, compilation_result, server_lambda)
+
+ # pylint: disable=protected-access
+ result._mlir = mlir
+ result._configuration = configuration
+ # pylint: enable=protected-access
+
+ return result
+
+ def save(self, path: Union[str, Path], via_mlir: bool = False):
+ """
+ Save the server into the given path in zip format.
+
+ Args:
+ path (Union[str, Path]):
+ path to save the server
+
+ via_mlir (bool, default = False):
+ export using the MLIR code of the program,
+ this will make the export cross-platform
+ """
+
+ path = str(path)
+ if path.endswith(".zip"):
+ path = path[: len(path) - 4]
+
+ if via_mlir:
+ if self._mlir is None or self._configuration is None:
+ message = "Loaded server objects cannot be saved again via MLIR"
+ raise RuntimeError(message)
+
+ with tempfile.TemporaryDirectory() as tmp:
+
+ with open(Path(tmp) / "circuit.mlir", "w", encoding="utf-8") as f:
+ f.write(self._mlir)
+
+ with open(Path(tmp) / "input_signs.json", "w", encoding="utf-8") as f:
+ f.write(json.dumps(self.client_specs.input_signs))
+
+ with open(Path(tmp) / "output_signs.json", "w", encoding="utf-8") as f:
+ f.write(json.dumps(self.client_specs.output_signs))
+
+ with open(Path(tmp) / "configuration.json", "w", encoding="utf-8") as f:
+ f.write(json.dumps(self._configuration.__dict__))
+
+ shutil.make_archive(path, "zip", tmp)
+
+ return
+
+ if self._output_dir is None:
+ message = "Just-in-Time compilation cannot be saved"
+ raise RuntimeError(message)
+
+ with open(Path(self._output_dir.name) / "client.specs.json", "w", encoding="utf-8") as f:
+ f.write(self.client_specs.serialize())
+
+ shutil.make_archive(path, "zip", self._output_dir.name)
+
+ @staticmethod
+ def load(path: Union[str, Path]) -> "Server":
+ """
+ Load the server from the given path in zip format.
+
+ Args:
+ path (Union[str, Path]):
+ path to load the server from
+
+ Returns:
+ Server:
+ server loaded from the filesystem
+ """
+
+ # pylint: disable=consider-using-with
+ output_dir = tempfile.TemporaryDirectory()
+ output_dir_path = Path(output_dir.name)
+ # pylint: enable=consider-using-with
+
+ shutil.unpack_archive(path, str(output_dir_path), "zip")
+
+ if (output_dir_path / "circuit.mlir").exists():
+ with open(output_dir_path / "circuit.mlir", "r", encoding="utf-8") as f:
+ mlir = f.read()
+
+ with open(output_dir_path / "input_signs.json", "r", encoding="utf-8") as f:
+ input_signs = json.load(f)
+ assert_that(isinstance(input_signs, list))
+ assert_that(all(isinstance(sign, bool) for sign in input_signs))
+
+ with open(output_dir_path / "output_signs.json", "r", encoding="utf-8") as f:
+ output_signs = json.load(f)
+ assert_that(isinstance(output_signs, list))
+ assert_that(all(isinstance(sign, bool) for sign in output_signs))
+
+ with open(output_dir_path / "configuration.json", "r", encoding="utf-8") as f:
+ configuration = Configuration().fork(**json.load(f))
+
+ return Server.create(mlir, input_signs, output_signs, configuration)
+
+ with open(output_dir_path / "client.specs.json", "r", encoding="utf-8") as f:
+ client_specs = ClientSpecs.unserialize(f.read())
+
+ support = LibrarySupport.new(
+ str(output_dir_path),
+ generateCppHeader=False,
+ generateStaticLib=False,
+ )
+ compilation_result = support.reload("main")
+ server_lambda = support.load_server_lambda(compilation_result)
+
+ return Server(client_specs, output_dir, support, compilation_result, server_lambda)
+
+ def run(self, args: PublicArguments, evaluation_keys: EvaluationKeys) -> PublicResult:
+ """
+ Evaluate using encrypted arguments.
+
+ Args:
+ args (PublicArguments):
+ encrypted arguments of the computation
+
+ evaluation_keys (EvaluationKeys):
+ evaluation keys for encrypted computation
+
+ Returns:
+ PublicResult:
+ encrypted result of the computation
+ """
+
+ return self._support.server_call(self._server_lambda, args, evaluation_keys)
+
+ def cleanup(self):
+ """
+ Cleanup the temporary library output directory.
+ """
+
+ if self._output_dir is not None:
+ self._output_dir.cleanup()
+
+ @property
+ def complexity(self) -> float:
+ """
+ Get complexity of the compiled program.
+ """
+ return self._compilation_feedback.complexity
+
+ @property
+ def size_of_secret_keys(self) -> int:
+ """
+ Get size of the secret keys of the compiled program.
+ """
+ return self._compilation_feedback.total_secret_keys_size
+
+ @property
+ def size_of_bootstrap_keys(self) -> int:
+ """
+ Get size of the bootstrap keys of the compiled program.
+ """
+ return self._compilation_feedback.total_bootstrap_keys_size
+
+ @property
+ def size_of_keyswitch_keys(self) -> int:
+ """
+ Get size of the key switch keys of the compiled program.
+ """
+ return self._compilation_feedback.total_keyswitch_keys_size
+
+ @property
+ def size_of_inputs(self) -> int:
+ """
+ Get size of the inputs of the compiled program.
+ """
+ return self._compilation_feedback.total_inputs_size
+
+ @property
+ def size_of_outputs(self) -> int:
+ """
+ Get size of the outputs of the compiled program.
+ """
+ return self._compilation_feedback.total_output_size
+
+ @property
+ def p_error(self) -> float:
+ """
+ Get the probability of error for each simple TLU (on a scalar).
+ """
+ return self._compilation_feedback.p_error
+
+ @property
+ def global_p_error(self) -> float:
+ """
+ Get the probability of having at least one simple TLU error during the entire execution.
+ """
+ return self._compilation_feedback.global_p_error
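`Server.save(via_mlir=True)` writes the portable pieces into a temporary directory as individual files and zips it; `Server.load` unpacks the archive and reads them back. The round trip, sketched with stand-in JSON payloads (helper names are illustrative):

```python
import json
import shutil
import tempfile
from pathlib import Path


def save_bundle(path: str, payload: dict) -> None:
    # one JSON file per field in a scratch directory, then zip it (as Server.save does)
    target = path[:-4] if path.endswith(".zip") else path
    with tempfile.TemporaryDirectory() as tmp:
        for name, value in payload.items():
            (Path(tmp) / f"{name}.json").write_text(json.dumps(value), encoding="utf-8")
        shutil.make_archive(target, "zip", tmp)


def load_bundle(path: str) -> dict:
    # unpack into a scratch directory and read every JSON file back
    with tempfile.TemporaryDirectory() as tmp:
        shutil.unpack_archive(path, tmp, "zip")
        return {
            p.stem: json.loads(p.read_text(encoding="utf-8"))
            for p in Path(tmp).glob("*.json")
        }
```

Note that `shutil.make_archive` appends the `.zip` suffix itself, which is why both the sketch and `Server.save` strip a trailing `.zip` from the requested path first.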
diff --git a/frontends/concrete-python/concrete/numpy/compilation/specs.py b/frontends/concrete-python/concrete/numpy/compilation/specs.py
new file mode 100644
index 000000000..1d0c4f54d
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/specs.py
@@ -0,0 +1,127 @@
+"""
+Declaration of `ClientSpecs` class.
+"""
+
+import json
+from typing import List
+
+from concrete.compiler import ClientParameters, PublicArguments, PublicResult
+
+
+class ClientSpecs:
+ """
+ ClientSpecs class, to create Client objects.
+ """
+
+ input_signs: List[bool]
+ client_parameters: ClientParameters
+ output_signs: List[bool]
+
+ def __init__(
+ self,
+ input_signs: List[bool],
+ client_parameters: ClientParameters,
+ output_signs: List[bool],
+ ):
+ self.input_signs = input_signs
+ self.client_parameters = client_parameters
+ self.output_signs = output_signs
+
+ def serialize(self) -> str:
+ """
+ Serialize client specs into a string representation.
+
+ Returns:
+ str:
+ string representation of the client specs
+ """
+
+ client_parameters_json = json.loads(self.client_parameters.serialize())
+ return json.dumps(
+ {
+ "input_signs": self.input_signs,
+ "client_parameters": client_parameters_json,
+ "output_signs": self.output_signs,
+ }
+ )
+
+ @staticmethod
+ def unserialize(serialized_client_specs: str) -> "ClientSpecs":
+ """
+ Create client specs from its string representation.
+
+ Args:
+ serialized_client_specs (str):
+ client specs to unserialize
+
+ Returns:
+ ClientSpecs:
+ unserialized client specs
+ """
+
+ raw_specs = json.loads(serialized_client_specs)
+
+ client_parameters_bytes = json.dumps(raw_specs["client_parameters"]).encode("utf-8")
+ client_parameters = ClientParameters.unserialize(client_parameters_bytes)
+
+ return ClientSpecs(raw_specs["input_signs"], client_parameters, raw_specs["output_signs"])
+
+ def serialize_public_args(self, args: PublicArguments) -> bytes: # pylint: disable=no-self-use
+ """
+ Serialize public arguments to bytes.
+
+ Args:
+ args (PublicArguments):
+ public arguments to serialize
+
+ Returns:
+ bytes:
+ serialized public arguments
+ """
+
+ return args.serialize()
+
+ def unserialize_public_args(self, serialized_args: bytes) -> PublicArguments:
+ """
+ Unserialize public arguments from bytes.
+
+ Args:
+ serialized_args (bytes):
+ serialized public arguments
+
+ Returns:
+ PublicArguments:
+ unserialized public arguments
+ """
+
+ return PublicArguments.unserialize(self.client_parameters, serialized_args)
+
+ def serialize_public_result(self, result: PublicResult) -> bytes: # pylint: disable=no-self-use
+ """
+ Serialize public result to bytes.
+
+ Args:
+ result (PublicResult):
+ public result to serialize
+
+ Returns:
+ bytes:
+ serialized public result
+ """
+
+ return result.serialize()
+
+ def unserialize_public_result(self, serialized_result: bytes) -> PublicResult:
+ """
+ Unserialize public result from bytes.
+
+ Args:
+ serialized_result (bytes):
+ serialized public result
+
+ Returns:
+ PublicResult:
+ unserialized public result
+ """
+
+ return PublicResult.unserialize(self.client_parameters, serialized_result)
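`ClientSpecs.serialize` embeds the already-serialized client parameters by parsing them first, so they nest as a JSON object rather than an escaped string; `unserialize` re-dumps that object to hand the component back its own bytes. The technique in isolation (function names are illustrative):

```python
import json


def serialize_specs(input_signs, client_parameters_json: str, output_signs) -> str:
    # parse the component's own JSON first so it nests as an object,
    # not as an escaped string (this mirrors ClientSpecs.serialize)
    return json.dumps(
        {
            "input_signs": input_signs,
            "client_parameters": json.loads(client_parameters_json),
            "output_signs": output_signs,
        }
    )


def unserialize_specs(serialized: str):
    raw = json.loads(serialized)
    # re-dump the nested object to recover the component's own serialization
    inner = json.dumps(raw["client_parameters"])
    return raw["input_signs"], inner, raw["output_signs"]
```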
diff --git a/frontends/concrete-python/concrete/numpy/compilation/utils.py b/frontends/concrete-python/concrete/numpy/compilation/utils.py
new file mode 100644
index 000000000..8be9df467
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/compilation/utils.py
@@ -0,0 +1,685 @@
+"""
+Declaration of various functions and constants related to compilation.
+"""
+
+from copy import deepcopy
+from typing import Dict, Iterable, List, Optional, Set, Tuple
+
+import networkx as nx
+
+from ..dtypes import Float, Integer
+from ..representation import Graph, Node, Operation
+from .artifacts import DebugArtifacts
+
+# ruff: noqa: ERA001
+
+
+def fuse(graph: Graph, artifacts: Optional[DebugArtifacts] = None):
+ """
+ Fuse appropriate subgraphs in a graph, each into a single Operation.Generic node.
+
+ Args:
+ graph (Graph):
+ graph to search and update
+
+ artifacts (Optional[DebugArtifacts], default = None):
+ compilation artifacts to store information about the fusing process
+
+ Raises:
+ RuntimeError:
+ if there is a subgraph which needs to be fused but cannot be fused
+ """
+
+ nx_graph = graph.graph
+ processed_terminal_nodes: Set[Node] = set()
+
+ fusing_floats = True
+ while True:
+ subgraph_to_fuse = (
+ find_float_subgraph_with_unique_terminal_node(
+ graph,
+ processed_terminal_nodes,
+ )
+ if fusing_floats
+ else find_tlu_subgraph_with_multiple_variable_inputs_that_has_a_single_common_ancestor(
+ graph,
+ processed_terminal_nodes,
+ )
+ )
+
+ if subgraph_to_fuse is None:
+ if fusing_floats:
+ fusing_floats = False
+ processed_terminal_nodes.clear()
+ continue
+ break
+
+ all_nodes, start_nodes, terminal_node = subgraph_to_fuse
+ processed_terminal_nodes.add(terminal_node)
+
+ fused_node, node_before_subgraph = convert_subgraph_to_subgraph_node(
+ graph,
+ all_nodes,
+ start_nodes,
+ terminal_node,
+ )
+ nx_graph.add_node(fused_node)
+
+ if terminal_node in graph.output_nodes.values():
+ output_node_to_idx: Dict[Node, List[int]] = {
+ out_node: [] for out_node in graph.output_nodes.values()
+ }
+ for output_idx, output_node in graph.output_nodes.items():
+ output_node_to_idx[output_node].append(output_idx)
+
+ for output_idx in output_node_to_idx.get(terminal_node, []):
+ graph.output_nodes[output_idx] = fused_node
+
+ terminal_node_succ = list(nx_graph.successors(terminal_node))
+ for succ in terminal_node_succ:
+ succ_edge_data = deepcopy(nx_graph.get_edge_data(terminal_node, succ))
+ for edge_key, edge_data in succ_edge_data.items():
+ nx_graph.remove_edge(terminal_node, succ, key=edge_key)
+ new_edge_data = deepcopy(edge_data)
+ nx_graph.add_edge(fused_node, succ, key=edge_key, **new_edge_data)
+
+ nx_graph.add_edge(node_before_subgraph, fused_node, input_idx=0)
+
+ graph.prune_useless_nodes()
+ if artifacts is not None:
+ artifacts.add_graph("after-fusing", graph)
+
+
+def find_float_subgraph_with_unique_terminal_node(
+ graph: Graph,
+ processed_terminal_nodes: Set[Node],
+) -> Optional[Tuple[Dict[Node, None], Dict[Node, None], Node]]:
+ """
+ Find a subgraph with float computations that end with an integer output.
+
+ Args:
+ graph (Graph):
+ graph to search
+
+ processed_terminal_nodes (Set[Node]):
+ set of terminal nodes which have already been searched for float subgraphs
+
+ Returns:
+ Optional[Tuple[Dict[Node, None], Dict[Node, None], Node]]:
+ None if there are no such subgraphs,
+ tuple containing all nodes in the subgraph, start nodes of the subgraph,
+ and terminal node of the subgraph otherwise
+ """
+
+ nx_graph = graph.graph
+ terminal_nodes = (
+ node
+ for node in nx_graph.nodes()
+ if (
+ node not in processed_terminal_nodes
+ and any(isinstance(input.dtype, Float) for input in node.inputs)
+ and isinstance(node.output.dtype, Integer)
+ )
+ )
+ try:
+ terminal_node = next(terminal_nodes)
+ except StopIteration:
+ return None
+
+ all_nodes: Dict[Node, None] = {}
+
+ start_single_int_output_nodes_search_from = terminal_node
+ while True:
+ all_nodes, start_nodes = find_closest_integer_output_nodes(
+ graph,
+ [start_single_int_output_nodes_search_from],
+ all_nodes,
+ )
+
+ variable_start_nodes = [
+ start_node for start_node in start_nodes if start_node.operation != Operation.Constant
+ ]
+ if len(variable_start_nodes) == 1:
+ break
+
+ # find a common ancestor as we need a single variable input node
+ # lca == lowest common ancestor
+ lca = find_single_lca(graph, variable_start_nodes)
+
+ # if subgraph cannot be fused because there is no way to find a common ancestor, break
+ if lca is None:
+ break
+
+ # add the nodes from the `start_nodes` to `lca`, to `all_nodes`
+ all_nodes = add_nodes_from_to(graph, start_nodes, {lca: None}, all_nodes)
+
+ # if `lca` is a valid starting node for fusing break
+ if isinstance(lca.output.dtype, Integer):
+ # `lca` is the new start node
+ start_nodes = {lca: None}
+ break
+
+ # otherwise, push a little further
+ # (e.g., if there is a node just before, which has an integer output)
+ start_single_int_output_nodes_search_from = lca
+
+ return all_nodes, start_nodes, terminal_node
+
+
+def find_tlu_subgraph_with_multiple_variable_inputs_that_has_a_single_common_ancestor(
+ graph: Graph,
+ processed_terminal_nodes: Set[Node],
+) -> Optional[Tuple[Dict[Node, None], Dict[Node, None], Node]]:
+ """
+ Find a subgraph with a tlu computation that has multiple variable inputs \
+ where all variable inputs share a common ancestor.
+
+ Args:
+ graph (Graph):
+ graph to search
+
+ processed_terminal_nodes (Set[Node]):
+ set of terminal nodes which have already been searched for tlu subgraphs
+
+ Returns:
+ Optional[Tuple[Dict[Node, None], Dict[Node, None], Node]]:
+ None if there are no such subgraphs,
+ tuple containing all nodes in the subgraph, start nodes of the subgraph,
+ and terminal node of the subgraph otherwise
+ """
+
+ nx_graph = graph.graph
+ terminal_nodes = (
+ node
+ for node in nx_graph.nodes()
+ if (
+ node not in processed_terminal_nodes
+ and node.converted_to_table_lookup
+ and all(isinstance(input.dtype, Integer) for input in node.inputs)
+ and isinstance(node.output.dtype, Integer)
+ and len(
+ [
+ pred
+ for pred in nx_graph.predecessors(node)
+ if pred.operation != Operation.Constant
+ ]
+ )
+ > 1
+ )
+ )
+ try:
+ terminal_node = next(terminal_nodes)
+ except StopIteration:
+ return None
+
+ all_nodes: Dict[Node, None] = {}
+
+ while True:
+ variable_start_nodes = list(nx_graph.predecessors(terminal_node))
+
+ # find a common ancestor as we need a single variable input node
+ # lca == lowest common ancestor
+ lca = find_single_lca(graph, variable_start_nodes)
+
+ # if subgraph cannot be fused because there is no way to find a common ancestor, break
+ if lca is None:
+ start_nodes = {node: None for node in variable_start_nodes}
+ all_nodes = {node: None for node in variable_start_nodes + [terminal_node]}
+ break
+
+ # add the nodes from the `start_nodes` to `lca`, to `all_nodes`
+ all_nodes = add_nodes_from_to(
+ graph,
+ list(nx_graph.predecessors(terminal_node)),
+ {lca: None},
+ all_nodes,
+ )
+ all_nodes[terminal_node] = None
+
+ # if `lca` is a valid starting node for fusing break
+ if isinstance(lca.output.dtype, Integer):
+ # `lca` is the new start node
+ start_nodes = {lca: None}
+ break
+
+ return all_nodes, start_nodes, terminal_node
+
+
+def find_single_lca(graph: Graph, nodes: List[Node]) -> Optional[Node]:
+ """
+ Find the single lowest common ancestor of a list of nodes.
+
+ Args:
+ graph (Graph):
+ graph to search for single lca
+
+ nodes (List[Node]):
+ nodes to find the single lca of
+
+    Returns:
+ Optional[Node]:
+ single lca if it exists, None otherwise
+ """
+
+ nx_graph = graph.graph
+
+ # find all ancestors of `nodes`
+ # nodes themselves need to be in this set because the single lca can be within `nodes`
+ all_ancestors = [set(list(nx.ancestors(nx_graph, node)) + [node]) for node in nodes]
+
+ # find common ancestors among `nodes`
+ # if the single lca exists, it's in this set
+ common_ancestors = {
+ node
+ for node in nx_graph.nodes()
+ if node.operation != Operation.Constant
+ and all(node in ancestors for ancestors in all_ancestors)
+ }
+
+    # iterate over every node in the graph in reversed topological order
+    # this is to ensure the result, if found, is the single "lowest" common ancestor
+ for candidate in reversed(list(nx.topological_sort(nx_graph))):
+ # check if node is a common ancestor of all `nodes`
+ if candidate not in common_ancestors:
+ # if not, it cannot be the single lca
+ continue
+
+ # check if node is a single common ancestor of `nodes`
+ if is_single_common_ancestor(graph, candidate, nodes):
+ # if so, it's the single lca of `nodes`
+ # so return it
+ return candidate
+
+ # if none of the nodes in `common_ancestors` is the single lca
+ # there is no single lca of this set of nodes, so return None
+ return None
+
+
+def is_single_common_ancestor(
+ graph: Graph,
+ candidate: Node,
+ nodes: List[Node],
+) -> bool:
+ """
+ Determine if a node is the single common ancestor of a list of nodes.
+
+    Note that this function doesn't care about the `lowest` property of `lca`.
+
+ Args:
+ graph (Graph):
+ graph to perform the check
+
+ candidate (Node):
+ node to determine single common ancestor status
+
+ nodes (List[Node]):
+ nodes to determine single common ancestor status against
+
+    Returns:
+ bool:
+ True if `candidate` is a single common ancestor of `nodes`, False otherwise
+ """
+
+ nx_graph = graph.graph
+
+ # create a subgraph with `candidate` node
+ subgraph = nx.DiGraph()
+ subgraph.add_node(candidate)
+
+ # iterate over `nodes` to add them to the subgraph
+ # along with every path from `candidate` to them
+ for node in nodes:
+ subgraph.add_node(node)
+ for path in nx.all_simple_paths(nx_graph, source=candidate, target=node):
+ nx.add_path(subgraph, path)
+
+ # iterate over the nodes of the subgraph
+ for node in subgraph.nodes():
+ # the condition below doesn't apply to `candidate`
+ # as its predecessors are not in the subgraph
+ if node == candidate:
+ continue
+
+ # find number of predecessors in the subgraph and in the original graph
+ # except constant nodes in the original graph as
+ # - they are not in the subgraph
+ # - they don't affect fusability status
+ predecessor_count_in_subgraph = len(list(subgraph.predecessors(node)))
+ predecessor_count_in_nx_graph = len(
+ [pred for pred in nx_graph.predecessors(node) if pred.operation != Operation.Constant]
+ )
+
+ # see if number of predecessors are different
+ if predecessor_count_in_subgraph != predecessor_count_in_nx_graph:
+ # if so, `candidate` cannot be a single common ancestor
+            # the reasoning for this is explained below
+ return False
+
+ # if every node in the subgraph has the same number of predecessors
+ # as in the original graph `candidate` is in fact a single common ancestor
+ return True
+
+ # Here is why this function works.
+ #
+ # Legend:
+ # - /|\- = Edge
+ # - (...) = Intermediate Node
+ # - {...} = Candidate Node
+ # - [...] = Node of which single common ancestor is searched
+ # - {[...]} = Both Candidate Node and Node of which single common ancestor is searched
+ #
+    # Consider the following graph:
+ #
+ # (3) (x) (2)
+ # \ / \ /
+ # [{*}] (/)
+ # \ /
+ # [+]
+ #
+ # - Operation: (x * 3) + (x / 2)
+ # - Candidate: {*}
+ # - Nodes: [*] and [+]
+ #
+ # So we want to know if multiplication node is a single common ancestor of
+ # multiplication and addition nodes. The result is no in this case for our purposes.
+ #
+ # Once you apply the subgraph creation above, you'll get the following graph:
+ #
+ # (*)
+ # |
+ # (+)
+ #
+    # In this subgraph, the addition node only has a single predecessor,
+    # which means there is a path leading to the addition node that doesn't include
+    # the multiplication node, so we conclude the multiplication node is not a single common ancestor
+ #
+    # Now, consider the following graph:
+ #
+ # (3) {x} (2)
+ # \ / \ /
+ # [*] (/)
+ # \ /
+ # [+]
+ #
+ # - Operation: (x * 3) + (x / 2)
+ # - Candidate: {x}
+ # - Nodes: [*] and [+]
+ #
+ # So we want to know if the input node 'x' is the single common ancestor of
+ # multiplication and addition nodes. The result is yes in this case.
+ #
+ # Once you apply the subgraph creation above, you'll get the following graph:
+ #
+ # {x}
+ # / \
+ # [*] (/)
+ # \ /
+ # [+]
+ #
+    # In this subgraph, every node except the candidate node
+    # keeps all of its non-constant predecessors,
+    # which means all of those predecessors originated
+    # from the `candidate`, so it's a single common ancestor.
+ #
+    # When you think about it, this implementation makes a lot of sense for our purposes.
+    # It basically determines if `nodes` "solely" depend on the `candidate`,
+ # which is the condition for fusing.
+
+
+def find_closest_integer_output_nodes(
+ graph: Graph,
+ start_nodes: List[Node],
+ all_nodes: Dict[Node, None],
+) -> Tuple[Dict[Node, None], Dict[Node, None]]:
+ """
+ Find the closest upstream integer output nodes to a set of start nodes in a graph.
+
+ Args:
+ graph (Graph):
+ graph to search
+
+ start_nodes (List[Node]):
+ nodes from which to start the search
+
+ all_nodes (Dict[Node, None]):
+ set of nodes to be extended with visited nodes during the search
+
+ Returns:
+ Tuple[Dict[Node, None], Dict[Node, None]]:
+ tuple containing extended `all_nodes` and integer output nodes closest to `start_nodes`
+ """
+
+ nx_graph = graph.graph
+
+ closest_integer_output_nodes: Dict[Node, None] = {}
+ visited_nodes: Set[Node] = set()
+
+ current_nodes = {start_node: None for start_node in start_nodes}
+ while current_nodes:
+ next_nodes: Dict[Node, None] = {}
+ for node in current_nodes:
+ if node not in visited_nodes:
+ visited_nodes.add(node)
+
+ all_nodes.update({node: None})
+ for pred in nx_graph.predecessors(node):
+ if isinstance(pred.output.dtype, Integer):
+ closest_integer_output_nodes.update({pred: None})
+ all_nodes.update({pred: None})
+ else:
+ next_nodes.update({pred: None})
+ current_nodes = next_nodes
+
+ return all_nodes, closest_integer_output_nodes
+
+
+def add_nodes_from_to(
+ graph: Graph,
+ from_nodes: Iterable[Node],
+ to_nodes: Dict[Node, None],
+ all_nodes: Dict[Node, None],
+) -> Dict[Node, None]:
+ """
+ Add nodes from `from_nodes` to `to_nodes`, to `all_nodes`.
+
+ Args:
+ graph (Graph):
+ graph to traverse
+
+ from_nodes (Iterable[Node]):
+ nodes from which extending `all_nodes` start
+
+ to_nodes (Dict[Node, None]):
+ nodes to which extending `all_nodes` stop
+
+ all_nodes (Dict[Node, None]):
+ nodes to be extended
+
+ Returns:
+ Dict[Node, None]:
+ extended `all_nodes`
+ """
+
+ nx_graph = graph.graph
+
+ all_nodes.update(to_nodes)
+ visited_nodes: Set[Node] = set()
+
+ current_nodes = {from_node: None for from_node in from_nodes}
+ while current_nodes:
+ next_nodes: Dict[Node, None] = {}
+ for node in current_nodes:
+ if node not in visited_nodes:
+ visited_nodes.add(node)
+
+ all_nodes.update({node: None})
+ if node not in to_nodes:
+ predecessors = nx_graph.predecessors(node)
+ next_nodes.update({pred: None for pred in predecessors if pred not in to_nodes})
+ current_nodes = next_nodes
+
+ return all_nodes
+
+
+def convert_subgraph_to_subgraph_node(
+ graph: Graph,
+ all_nodes: Dict[Node, None],
+ start_nodes: Dict[Node, None],
+ terminal_node: Node,
+) -> Tuple[Node, Node]:
+ """
+ Convert a subgraph to Operation.Generic node.
+
+ Args:
+ graph (Graph):
+            original graph
+
+ all_nodes (Dict[Node, None]):
+ all nodes in the subgraph
+
+ start_nodes (Dict[Node, None]):
+ start nodes of the subgraph
+
+ terminal_node (Node):
+ terminal node of the subgraph
+
+ Raises:
+ RuntimeError:
+ if subgraph is not fusable
+
+    Returns:
+        Tuple[Node, Node]:
+            subgraph node and its predecessor
+ """
+
+ nx_graph = graph.graph
+
+ variable_input_nodes = [node for node in start_nodes if node.operation != Operation.Constant]
+ if len(variable_input_nodes) != 1:
+ base_highlighted_nodes = {
+ node: ["within this subgraph", node.location] for node in all_nodes
+ }
+ for variable_input_node in variable_input_nodes:
+ base_highlighted_nodes[variable_input_node] = [
+ "this is one of the input nodes",
+ variable_input_node.location,
+ ]
+
+ raise RuntimeError(
+ "A subgraph within the function you are trying to compile cannot be fused "
+ "because it has multiple input nodes\n\n"
+ + graph.format(highlighted_nodes=base_highlighted_nodes, show_bounds=False)
+ )
+
+ variable_input_node = variable_input_nodes[0]
+ check_subgraph_fusability(graph, all_nodes, variable_input_node)
+
+ nx_subgraph = nx.MultiDiGraph(nx_graph)
+ nodes_to_remove = [node for node in nx_subgraph.nodes() if node not in all_nodes]
+ nx_subgraph.remove_nodes_from(nodes_to_remove)
+
+ subgraph_variable_input_node = Node.input("input", deepcopy(variable_input_node.output))
+ nx_subgraph.add_node(subgraph_variable_input_node)
+
+ subgraph_variable_input_node.location = variable_input_node.location
+ subgraph_variable_input_node.tag = variable_input_node.tag
+ subgraph_variable_input_node.created_at = variable_input_node.created_at
+
+ variable_input_node_successors = {
+ node: None for node in all_nodes if node in nx_graph.succ[variable_input_node]
+ }
+ for successor in variable_input_node_successors:
+ edges = deepcopy(nx_subgraph.get_edge_data(variable_input_node, successor))
+ for edge_key, edge_data in edges.items():
+ nx_subgraph.remove_edge(variable_input_node, successor, key=edge_key)
+ new_edge_data = deepcopy(edge_data)
+ nx_subgraph.add_edge(
+ subgraph_variable_input_node,
+ successor,
+ key=edge_key,
+ **new_edge_data,
+ )
+
+ original_location = terminal_node.location
+ original_tag = terminal_node.tag
+ original_created_at = terminal_node.created_at
+
+ subgraph = Graph(nx_subgraph, {0: subgraph_variable_input_node}, {0: terminal_node})
+ subgraph_node = Node.generic(
+ "subgraph",
+ subgraph_variable_input_node.inputs,
+ terminal_node.output,
+ lambda x, subgraph, terminal_node: subgraph.evaluate(x)[terminal_node],
+ kwargs={
+ "subgraph": subgraph,
+ "terminal_node": terminal_node,
+ },
+ )
+
+ subgraph_node.location = original_location
+ subgraph_node.tag = original_tag
+ subgraph_node.created_at = original_created_at
+
+ return subgraph_node, variable_input_node
+
+
+def check_subgraph_fusability(
+ graph: Graph,
+ all_nodes: Dict[Node, None],
+ variable_input_node: Node,
+):
+ """
+ Determine if a subgraph can be fused.
+
+ e.g.,
+
+    shuffling or reshaping a tensor makes fusing impossible, as table lookups require
+    a one-to-one mapping between each cell of the input and each cell of the output
+
+ Args:
+ graph (Graph):
+ original graph
+
+ all_nodes (Dict[Node, None]):
+ all nodes in the subgraph
+
+ variable_input_node (Node):
+ variable input node to the subgraph
+
+ Raises:
+ RuntimeError:
+ if subgraph is not fusable
+ """
+
+ base_highlighted_nodes = {node: ["within this subgraph", node.location] for node in all_nodes}
+ base_highlighted_nodes[variable_input_node] = [
+ "with this input node",
+ variable_input_node.location,
+ ]
+
+ non_constant_nodes = (node for node in all_nodes if node.operation != Operation.Constant)
+ for node in non_constant_nodes:
+ if node == variable_input_node:
+ continue
+
+ if not node.is_fusable:
+ base_highlighted_nodes[node] = ["this node is not fusable", node.location]
+ raise RuntimeError(
+ "A subgraph within the function you are trying to compile cannot be fused "
+                "because of a node which is explicitly marked as non-fusable\n\n"
+ + graph.format(highlighted_nodes=base_highlighted_nodes, show_bounds=False)
+ )
+
+ if node.output.shape != variable_input_node.output.shape:
+ base_highlighted_nodes[node] = [
+ "this node has a different shape than the input node",
+ node.location,
+ ]
+ raise RuntimeError(
+ "A subgraph within the function you are trying to compile cannot be fused "
+                "because of a node which has a different shape than the input node\n\n"
+ + graph.format(highlighted_nodes=base_highlighted_nodes, show_bounds=False)
+ )
+
+ return True
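The subgraph-based single-common-ancestor check implemented above can be sketched in isolation. The following is a minimal, standalone illustration using plain networkx `DiGraph`s with string nodes; the function body mirrors the logic explained in the comments (constant nodes are omitted here, so raw in-degrees suffice), but the names and the example graph are ours, not part of the library API.

```python
# Sketch: a candidate is a single common ancestor of `nodes` iff, in the
# subgraph made of all simple paths from the candidate to each node, every
# node other than the candidate keeps all of its predecessors.
import networkx as nx


def is_single_common_ancestor(nx_graph, candidate, nodes):
    # build a subgraph containing `candidate`, `nodes`,
    # and every simple path from `candidate` to each node
    subgraph = nx.DiGraph()
    subgraph.add_node(candidate)
    for node in nodes:
        subgraph.add_node(node)
        for path in nx.all_simple_paths(nx_graph, source=candidate, target=node):
            nx.add_path(subgraph, path)

    # if any node (other than `candidate`) lost a predecessor in the subgraph,
    # some path reaches it without going through `candidate`
    for node in subgraph.nodes():
        if node == candidate:
            continue
        if subgraph.in_degree(node) != nx_graph.in_degree(node):
            return False
    return True


# graph for (x * 3) + (x / 2), constants omitted
g = nx.DiGraph([("x", "mul"), ("x", "div"), ("mul", "add"), ("div", "add")])

print(is_single_common_ancestor(g, "x", ["mul", "add"]))    # True
print(is_single_common_ancestor(g, "mul", ["mul", "add"]))  # False
```

This matches the two ASCII examples in the comments: `x` dominates both operations, while `mul` does not dominate `add` (the `div` path bypasses it).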
diff --git a/frontends/concrete-python/concrete/numpy/dtypes/__init__.py b/frontends/concrete-python/concrete/numpy/dtypes/__init__.py
new file mode 100644
index 000000000..1dbd8e7bd
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/dtypes/__init__.py
@@ -0,0 +1,7 @@
+"""
+Define available data types and their semantics.
+"""
+
+from .base import BaseDataType
+from .float import Float
+from .integer import Integer, SignedInteger, UnsignedInteger
diff --git a/frontends/concrete-python/concrete/numpy/dtypes/base.py b/frontends/concrete-python/concrete/numpy/dtypes/base.py
new file mode 100644
index 000000000..50521d140
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/dtypes/base.py
@@ -0,0 +1,17 @@
+"""
+Declaration of `BaseDataType` abstract class.
+"""
+
+from abc import ABC, abstractmethod
+
+
+class BaseDataType(ABC):
+ """BaseDataType abstract class, to form a basis for data types."""
+
+ @abstractmethod
+ def __eq__(self, other: object) -> bool:
+ pass # pragma: no cover
+
+ @abstractmethod
+ def __str__(self) -> str:
+ pass # pragma: no cover
diff --git a/frontends/concrete-python/concrete/numpy/dtypes/float.py b/frontends/concrete-python/concrete/numpy/dtypes/float.py
new file mode 100644
index 000000000..fb4c276e8
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/dtypes/float.py
@@ -0,0 +1,31 @@
+"""
+Declaration of `Float` class.
+"""
+
+from .base import BaseDataType
+
+
+class Float(BaseDataType):
+ """
+ Float class, to represent floating point numbers.
+ """
+
+ bit_width: int
+
+ def __init__(self, bit_width: int):
+ super().__init__()
+
+ if bit_width not in [16, 32, 64]:
+ message = (
+ f"Float({repr(bit_width)}) is not supported "
+ f"(bit width must be one of 16, 32 or 64)"
+ )
+ raise ValueError(message)
+
+ self.bit_width = bit_width
+
+ def __eq__(self, other: object) -> bool:
+ return isinstance(other, self.__class__) and self.bit_width == other.bit_width
+
+ def __str__(self) -> str:
+ return f"float{self.bit_width}"
diff --git a/frontends/concrete-python/concrete/numpy/dtypes/integer.py b/frontends/concrete-python/concrete/numpy/dtypes/integer.py
new file mode 100644
index 000000000..58957efb3
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/dtypes/integer.py
@@ -0,0 +1,155 @@
+"""
+Declaration of `Integer` class.
+"""
+
+import math
+from functools import partial
+from typing import Any
+
+import numpy as np
+
+from .base import BaseDataType
+
+
+class Integer(BaseDataType):
+ """
+ Integer class, to represent integers.
+ """
+
+ is_signed: bool
+ bit_width: int
+
+ @staticmethod
+ def that_can_represent(value: Any, force_signed: bool = False) -> "Integer":
+ """
+ Get the minimal `Integer` that can represent `value`.
+
+ Args:
+ value (Any):
+ value that needs to be represented
+
+ force_signed (bool, default = False):
+ whether to force signed integers or not
+
+ Returns:
+ Integer:
+ minimal `Integer` that can represent `value`
+
+ Raises:
+ ValueError:
+ if `value` cannot be represented by `Integer`
+ """
+
+ lower_bound: int
+ upper_bound: int
+
+ if isinstance(value, list):
+ try:
+ value = np.array(value)
+ except Exception: # pylint: disable=broad-except
+ # here we try our best to convert the list to np.ndarray
+ # if it fails we raise the exception at the else branch below
+ pass
+
+ if isinstance(value, (int, np.integer)):
+ lower_bound = int(value)
+ upper_bound = int(value)
+ elif isinstance(value, np.ndarray) and np.issubdtype(value.dtype, np.integer):
+ lower_bound = int(value.min())
+ upper_bound = int(value.max())
+ else:
+ message = f"Integer cannot represent {repr(value)}"
+ raise ValueError(message)
+
+ def bits_to_represent_int(value: int, force_signed: bool) -> int:
+ bits: int
+
+ if value == 0:
+ return 1
+
+ if value < 0:
+ bits = int(math.ceil(math.log2(abs(value)))) + 1
+ else:
+ bits = int(math.ceil(math.log2(value + 1)))
+ if force_signed:
+ bits += 1
+
+ return bits
+
+ is_signed = force_signed or lower_bound < 0
+ bit_width = (
+ bits_to_represent_int(lower_bound, is_signed)
+ if lower_bound == upper_bound
+ else max(
+ bits_to_represent_int(lower_bound, is_signed),
+ bits_to_represent_int(upper_bound, is_signed),
+ )
+ )
+
+ return Integer(is_signed, bit_width)
+
+ def __init__(self, is_signed: bool, bit_width: int):
+ super().__init__()
+
+ if not isinstance(bit_width, int) or bit_width <= 0:
+ integer_str = "SignedInteger" if is_signed else "UnsignedInteger"
+ message = (
+ f"{integer_str}({repr(bit_width)}) is not supported "
+ f"(bit width must be a positive integer)"
+ )
+ raise ValueError(message)
+
+ self.is_signed = is_signed
+ self.bit_width = bit_width
+
+ def __eq__(self, other: Any) -> bool:
+ return (
+ isinstance(other, self.__class__)
+ and self.is_signed == other.is_signed
+ and self.bit_width == other.bit_width
+ )
+
+ def __str__(self) -> str:
+ return f"{('int' if self.is_signed else 'uint')}{self.bit_width}"
+
+ def min(self) -> int:
+ """
+        Get the minimum value that can be represented by the `Integer`.
+
+ Returns:
+ int:
+                minimum value that can be represented by the `Integer`
+ """
+
+ return 0 if not self.is_signed else -(2 ** (self.bit_width - 1))
+
+ def max(self) -> int:
+ """
+ Get the maximum value that can be represented by the `Integer`.
+
+ Returns:
+ int:
+ maximum value that can be represented by the `Integer`
+ """
+
+ return (2**self.bit_width) - 1 if not self.is_signed else (2 ** (self.bit_width - 1)) - 1
+
+ def can_represent(self, value: int) -> bool:
+ """
+ Get whether `value` can be represented by the `Integer` or not.
+
+ Args:
+ value (int):
+ value to check representability
+
+ Returns:
+ bool:
+                True if `value` is representable by the `Integer`, False otherwise
+ """
+
+ return self.min() <= value <= self.max()
+
+
+SignedInteger = partial(Integer, True)
+
+UnsignedInteger = partial(Integer, False)
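The bit-width rule used by `Integer.that_can_represent` can be illustrated on its own. This is a standalone sketch of the arithmetic only (it mirrors the inner `bits_to_represent_int` helper above, not the class itself):

```python
# Sketch: minimal number of bits needed to represent a single integer.
import math


def bits_to_represent_int(value, force_signed=False):
    if value == 0:
        return 1
    if value < 0:
        # negative values always carry a sign bit
        return int(math.ceil(math.log2(abs(value)))) + 1
    bits = int(math.ceil(math.log2(value + 1)))
    # positive values only need a sign bit when signedness is forced
    return bits + 1 if force_signed else bits


print(bits_to_represent_int(0))        # 1
print(bits_to_represent_int(7))        # 3 (fits in uint3)
print(bits_to_represent_int(8))        # 4
print(bits_to_represent_int(-8))       # 4 (fits in int4, range [-8, 7])
print(bits_to_represent_int(7, True))  # 4 (signed 7 needs int4)
```

For a range, the class takes the maximum of this value over the lower and upper bounds, forcing signedness when the lower bound is negative.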
diff --git a/frontends/concrete-python/concrete/numpy/dtypes/utils.py b/frontends/concrete-python/concrete/numpy/dtypes/utils.py
new file mode 100644
index 000000000..06f584065
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/dtypes/utils.py
@@ -0,0 +1,74 @@
+"""
+Declaration of various functions and constants related to data types.
+"""
+
+from typing import List
+
+from ..internal.utils import assert_that
+from .base import BaseDataType
+from .float import Float
+from .integer import Integer, SignedInteger, UnsignedInteger
+
+
+def combine_dtypes(dtypes: List[BaseDataType]) -> BaseDataType:
+ """
+    Get the `BaseDataType` that can represent a set of `BaseDataType`s.
+
+ Args:
+ dtypes (List[BaseDataType]):
+ dtypes to combine
+
+ Returns:
+ BaseDataType:
+ dtype that can hold all the given dtypes (potentially lossy)
+ """
+
+ assert_that(len(dtypes) != 0)
+ assert_that(all(isinstance(dtype, (Integer, Float)) for dtype in dtypes))
+
+ def combine_2_dtypes(dtype1: BaseDataType, dtype2: BaseDataType) -> BaseDataType:
+ result: BaseDataType = dtype1
+
+ if isinstance(dtype1, Integer) and isinstance(dtype2, Integer):
+ max_bits = max(dtype1.bit_width, dtype2.bit_width)
+
+ if dtype1.is_signed and dtype2.is_signed:
+ result = SignedInteger(max_bits)
+
+ elif not dtype1.is_signed and not dtype2.is_signed:
+ result = UnsignedInteger(max_bits)
+
+ elif dtype1.is_signed and not dtype2.is_signed:
+ # if dtype2 has the bigger bit_width,
+ # we need a signed integer that can hold
+ # it, so add 1 bit of sign to its bit_width
+ if dtype2.bit_width >= dtype1.bit_width:
+ new_bit_width = dtype2.bit_width + 1
+ result = SignedInteger(new_bit_width)
+ else:
+ result = SignedInteger(dtype1.bit_width)
+
+ elif not dtype1.is_signed and dtype2.is_signed:
+ # Same as above, with dtype1 and dtype2 switched around
+ if dtype1.bit_width >= dtype2.bit_width:
+ new_bit_width = dtype1.bit_width + 1
+ result = SignedInteger(new_bit_width)
+ else:
+ result = SignedInteger(dtype2.bit_width)
+
+ elif isinstance(dtype1, Float) and isinstance(dtype2, Float):
+ max_bits = max(dtype1.bit_width, dtype2.bit_width)
+ result = Float(max_bits)
+
+ elif isinstance(dtype1, Float):
+ result = dtype1
+
+ elif isinstance(dtype2, Float):
+ result = dtype2
+
+ return result
+
+ result = dtypes[0]
+ for other in dtypes[1:]:
+ result = combine_2_dtypes(result, other)
+ return result
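The mixed-sign case in `combine_dtypes` is the only non-obvious branch: combining a signed and an unsigned integer may need one extra bit. The following standalone sketch shows just that arithmetic (the real function operates on dtype objects, not bare bit widths):

```python
# Sketch: bit width of a signed integer that can hold both a signed
# operand of `signed_bits` bits and an unsigned operand of `unsigned_bits`.
def combined_signed_bit_width(signed_bits, unsigned_bits):
    # when the unsigned operand is at least as wide as the signed one,
    # its maximum value doesn't fit and a sign bit must be added on top
    if unsigned_bits >= signed_bits:
        return unsigned_bits + 1
    return signed_bits


print(combined_signed_bit_width(8, 8))  # 9: uint8 max (255) needs int9
print(combined_signed_bit_width(8, 4))  # 8: uint4 values fit in int8
```

This is why the docstring flags the combination as potentially lossy only across the integer/float boundary; within integers the result is always wide enough.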
diff --git a/frontends/concrete-python/concrete/numpy/extensions/__init__.py b/frontends/concrete-python/concrete/numpy/extensions/__init__.py
new file mode 100644
index 000000000..7648218d9
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/__init__.py
@@ -0,0 +1,11 @@
+"""
+Provide additional features that are not present in numpy.
+"""
+
+from .array import array
+from .ones import one, ones
+from .round_bit_pattern import AutoRounder, round_bit_pattern
+from .table import LookupTable
+from .tag import tag
+from .univariate import univariate
+from .zeros import zero, zeros
diff --git a/frontends/concrete-python/concrete/numpy/extensions/array.py b/frontends/concrete-python/concrete/numpy/extensions/array.py
new file mode 100644
index 000000000..32c9d768b
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/array.py
@@ -0,0 +1,59 @@
+"""
+Declaration of `array` function, to simplify creation of encrypted arrays.
+"""
+
+from typing import Any, Union
+
+import numpy as np
+
+from ..dtypes.utils import combine_dtypes
+from ..representation import Node
+from ..tracing import Tracer
+from ..values import Value
+
+
+def array(values: Any) -> Union[np.ndarray, Tracer]:
+ """
+ Create an encrypted array from either encrypted or clear values.
+
+ Args:
+ values (Any):
+            array-like object compatible with numpy to construct the resulting encrypted array
+
+ Returns:
+ Union[np.ndarray, Tracer]:
+            Tracer that represents the operation during tracing
+ ndarray with values otherwise
+ """
+
+ # pylint: disable=protected-access
+ is_tracing = Tracer._is_tracing
+ # pylint: enable=protected-access
+
+ if not isinstance(values, np.ndarray):
+ values = np.array(values)
+
+ if not is_tracing:
+ return values
+
+ shape = values.shape
+ values = values.flatten()
+
+ for i, value in enumerate(values):
+ if not isinstance(value, Tracer):
+ values[i] = Tracer.sanitize(value)
+
+ if not values[i].output.is_scalar:
+ message = "Encrypted arrays can only be created from scalars"
+ raise ValueError(message)
+
+ dtype = combine_dtypes([value.output.dtype for value in values])
+ is_encrypted = True
+
+ computation = Node.generic(
+ "array",
+ [value.output for value in values],
+ Value(dtype, shape, is_encrypted),
+ lambda *args: np.array(args).reshape(shape),
+ )
+ return Tracer(computation, values)
diff --git a/frontends/concrete-python/concrete/numpy/extensions/ones.py b/frontends/concrete-python/concrete/numpy/extensions/ones.py
new file mode 100644
index 000000000..bc523468f
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/ones.py
@@ -0,0 +1,56 @@
+"""
+Declaration of `ones` and `one` functions, to simplify creation of encrypted ones.
+"""
+
+from typing import Tuple, Union
+
+import numpy as np
+
+from ..representation import Node
+from ..tracing import Tracer
+from ..values import Value
+
+
+def ones(shape: Union[int, Tuple[int, ...]]) -> Union[np.ndarray, Tracer]:
+ """
+ Create an encrypted array of ones.
+
+ Args:
+ shape (Tuple[int, ...]):
+ shape of the array
+
+ Returns:
+ Union[np.ndarray, Tracer]:
+            Tracer that represents the operation during tracing
+ ndarray filled with ones otherwise
+ """
+
+ # pylint: disable=protected-access
+ is_tracing = Tracer._is_tracing
+ # pylint: enable=protected-access
+
+ numpy_ones = np.ones(shape, dtype=np.int64)
+
+ if is_tracing:
+ computation = Node.generic(
+ "ones",
+ [],
+ Value.of(numpy_ones, is_encrypted=True),
+ lambda: np.ones(shape, dtype=np.int64),
+ )
+ return Tracer(computation, [])
+
+ return numpy_ones
+
+
+def one() -> Union[np.ndarray, Tracer]:
+ """
+ Create an encrypted scalar with the value of one.
+
+ Returns:
+ Union[np.ndarray, Tracer]:
+            Tracer that represents the operation during tracing
+ ndarray with one otherwise
+ """
+
+ return ones(())
diff --git a/frontends/concrete-python/concrete/numpy/extensions/round_bit_pattern.py b/frontends/concrete-python/concrete/numpy/extensions/round_bit_pattern.py
new file mode 100644
index 000000000..288479e6e
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/round_bit_pattern.py
@@ -0,0 +1,246 @@
+"""
+Declaration of `round_bit_pattern` function, to provide an interface for rounded table lookups.
+"""
+
+import threading
+from copy import deepcopy
+from typing import Any, Callable, Iterable, List, Tuple, Union
+
+import numpy as np
+
+from ..dtypes import Integer
+from ..mlir.utils import MAXIMUM_TLU_BIT_WIDTH
+from ..representation import Node
+from ..tracing import Tracer
+from ..values import Value
+
+local = threading.local()
+
+# pylint: disable=protected-access
+local._is_adjusting = False
+# pylint: enable=protected-access
+
+
+class Adjusting(BaseException):
+ """
+ Adjusting class, to be used as early stop signal during adjustment.
+ """
+
+ rounder: "AutoRounder"
+ input_min: int
+ input_max: int
+
+ def __init__(self, rounder: "AutoRounder", input_min: int, input_max: int):
+ super().__init__()
+ self.rounder = rounder
+ self.input_min = input_min
+ self.input_max = input_max
+
+
+class AutoRounder:
+ """
+    AutoRounder class, to optimize the number of msbs to keep during the round bit pattern operation.
+ """
+
+ target_msbs: int
+
+ is_adjusted: bool
+ input_min: int
+ input_max: int
+ input_bit_width: int
+ lsbs_to_remove: int
+
+ def __init__(self, target_msbs: int = MAXIMUM_TLU_BIT_WIDTH):
+ # pylint: disable=protected-access
+ if local._is_adjusting:
+ message = (
+ "AutoRounders cannot be constructed during adjustment, "
+ "please construct AutoRounders outside the function and reference it"
+ )
+ raise RuntimeError(message)
+ # pylint: enable=protected-access
+
+ self.target_msbs = target_msbs
+
+ self.is_adjusted = False
+ self.input_min = 0
+ self.input_max = 0
+ self.input_bit_width = 0
+ self.lsbs_to_remove = 0
+
+ @staticmethod
+ def adjust(function: Callable, inputset: Union[Iterable[Any], Iterable[Tuple[Any, ...]]]):
+ """
+ Adjust AutoRounders in a function using an inputset.
+ """
+
+ # pylint: disable=protected-access,too-many-branches
+
+ try: # extract underlying function for decorators
+ function = function.function # type: ignore
+ assert callable(function)
+ except AttributeError:
+ pass
+
+ if local._is_adjusting:
+ message = "AutoRounders cannot be adjusted recursively"
+ raise RuntimeError(message)
+
+ try:
+ local._is_adjusting = True
+ while True:
+ rounder = None
+
+ for sample in inputset:
+ if not isinstance(sample, tuple):
+ sample = (sample,)
+
+ try:
+ function(*sample)
+ except Adjusting as adjuster:
+ rounder = adjuster.rounder
+
+ rounder.input_min = min(rounder.input_min, adjuster.input_min)
+ rounder.input_max = max(rounder.input_max, adjuster.input_max)
+
+ input_value = Value.of([rounder.input_min, rounder.input_max])
+ assert isinstance(input_value.dtype, Integer)
+ rounder.input_bit_width = input_value.dtype.bit_width
+
+ if rounder.input_bit_width - rounder.lsbs_to_remove > rounder.target_msbs:
+ rounder.lsbs_to_remove = rounder.input_bit_width - rounder.target_msbs
+ else:
+ return
+
+ if rounder is None:
+ message = "AutoRounders cannot be adjusted with an empty inputset"
+ raise ValueError(message)
+
+ rounder.is_adjusted = True
+
+ finally:
+ local._is_adjusting = False
+
+ # pylint: enable=protected-access,too-many-branches
+
+
+def round_bit_pattern(
+ x: Union[int, np.integer, List, np.ndarray, Tracer],
+ lsbs_to_remove: Union[int, AutoRounder],
+) -> Union[int, np.integer, List, np.ndarray, Tracer]:
+ """
+ Round the bit pattern of an integer.
+
+ If `lsbs_to_remove` is an `AutoRounder`:
+        the corresponding integer value will be determined by the adjustment process.
+
+ x = 0b_0000_0000 , lsbs_to_remove = 3 => 0b_0000_0000
+ x = 0b_0000_0001 , lsbs_to_remove = 3 => 0b_0000_0000
+ x = 0b_0000_0010 , lsbs_to_remove = 3 => 0b_0000_0000
+ x = 0b_0000_0011 , lsbs_to_remove = 3 => 0b_0000_0000
+ x = 0b_0000_0100 , lsbs_to_remove = 3 => 0b_0000_1000
+ x = 0b_0000_0101 , lsbs_to_remove = 3 => 0b_0000_1000
+ x = 0b_0000_0110 , lsbs_to_remove = 3 => 0b_0000_1000
+ x = 0b_0000_0111 , lsbs_to_remove = 3 => 0b_0000_1000
+
+ x = 0b_1010_0000 , lsbs_to_remove = 3 => 0b_1010_0000
+ x = 0b_1010_0001 , lsbs_to_remove = 3 => 0b_1010_0000
+ x = 0b_1010_0010 , lsbs_to_remove = 3 => 0b_1010_0000
+ x = 0b_1010_0011 , lsbs_to_remove = 3 => 0b_1010_0000
+ x = 0b_1010_0100 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_0101 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_0110 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_0111 , lsbs_to_remove = 3 => 0b_1010_1000
+
+ x = 0b_1010_1000 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_1001 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_1010 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_1011 , lsbs_to_remove = 3 => 0b_1010_1000
+ x = 0b_1010_1100 , lsbs_to_remove = 3 => 0b_1011_0000
+ x = 0b_1010_1101 , lsbs_to_remove = 3 => 0b_1011_0000
+ x = 0b_1010_1110 , lsbs_to_remove = 3 => 0b_1011_0000
+ x = 0b_1010_1111 , lsbs_to_remove = 3 => 0b_1011_0000
+
+ x = 0b_1011_1000 , lsbs_to_remove = 3 => 0b_1011_1000
+ x = 0b_1011_1001 , lsbs_to_remove = 3 => 0b_1011_1000
+ x = 0b_1011_1010 , lsbs_to_remove = 3 => 0b_1011_1000
+ x = 0b_1011_1011 , lsbs_to_remove = 3 => 0b_1011_1000
+ x = 0b_1011_1100 , lsbs_to_remove = 3 => 0b_1100_0000
+ x = 0b_1011_1101 , lsbs_to_remove = 3 => 0b_1100_0000
+ x = 0b_1011_1110 , lsbs_to_remove = 3 => 0b_1100_0000
+ x = 0b_1011_1111 , lsbs_to_remove = 3 => 0b_1100_0000
+
+ Args:
+ x (Union[int, np.integer, np.ndarray, Tracer]):
+ input to round
+
+ lsbs_to_remove (Union[int, AutoRounder]):
+ number of the least significant bits to remove
+ or an auto rounder object which will be used to determine the integer value
+
+ Returns:
+ Union[int, np.integer, np.ndarray, Tracer]:
+            Tracer that represents the operation during tracing
+ rounded value(s) otherwise
+ """
+
+ # pylint: disable=protected-access,too-many-branches
+
+ if isinstance(lsbs_to_remove, AutoRounder):
+ if local._is_adjusting:
+ if not lsbs_to_remove.is_adjusted:
+ raise Adjusting(lsbs_to_remove, int(np.min(x)), int(np.max(x))) # type: ignore
+
+ elif not lsbs_to_remove.is_adjusted:
+ message = (
+ "AutoRounders cannot be used before adjustment, "
+ "please call AutoRounder.adjust with the function that will be compiled "
+ "and provide the exact inputset that will be used for compilation"
+ )
+ raise RuntimeError(message)
+
+ lsbs_to_remove = lsbs_to_remove.lsbs_to_remove
+
+ assert isinstance(lsbs_to_remove, int)
+
+ def evaluator(
+ x: Union[int, np.integer, np.ndarray],
+ lsbs_to_remove: int,
+ ) -> Union[int, np.integer, np.ndarray]:
+ if lsbs_to_remove == 0:
+ return x
+
+ unit = 1 << lsbs_to_remove
+ half = 1 << (lsbs_to_remove - 1)
+ rounded = (x + half) // unit
+ return rounded * unit
+
+ if isinstance(x, Tracer):
+ computation = Node.generic(
+ "round_bit_pattern",
+ [x.output],
+ deepcopy(x.output),
+ evaluator,
+ kwargs={"lsbs_to_remove": lsbs_to_remove},
+ )
+ return Tracer(computation, [x])
+
+ if isinstance(x, list): # pragma: no cover
+ try:
+ x = np.array(x)
+ except Exception: # pylint: disable=broad-except
+ pass
+
+ if isinstance(x, np.ndarray):
+ if not np.issubdtype(x.dtype, np.integer):
+ message = (
+ f"Expected input elements to be integers but they are {x.dtype}"
+ )
+ raise TypeError(message)
+ elif not isinstance(x, (int, np.integer)):
+ message = f"Expected input to be an int or a numpy array but it's {type(x).__name__}"
+ raise TypeError(message)
+
+ return evaluator(x, lsbs_to_remove)
+
+ # pylint: enable=protected-access,too-many-branches
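The bit-pattern tables in the docstring above follow directly from the `evaluator` logic: add half of the rounding unit, then truncate the removed bits. A standalone sketch (with a hypothetical helper name) reproduces them:

```python
def round_bit_pattern_plain(x: int, lsbs_to_remove: int) -> int:
    # add half of the rounding unit, then clear the removed least significant bits
    if lsbs_to_remove == 0:
        return x
    unit = 1 << lsbs_to_remove
    half = 1 << (lsbs_to_remove - 1)
    return ((x + half) // unit) * unit

# values below the midpoint round down, values at or above it round up
assert round_bit_pattern_plain(0b_1010_1011, 3) == 0b_1010_1000
assert round_bit_pattern_plain(0b_1010_1100, 3) == 0b_1011_0000
```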
diff --git a/frontends/concrete-python/concrete/numpy/extensions/table.py b/frontends/concrete-python/concrete/numpy/extensions/table.py
new file mode 100644
index 000000000..c612af7e3
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/table.py
@@ -0,0 +1,134 @@
+"""
+Declaration of `LookupTable` class.
+"""
+
+from copy import deepcopy
+from typing import Any, Union
+
+import numpy as np
+
+from ..dtypes import BaseDataType, Integer
+from ..representation import Node
+from ..tracing import Tracer
+
+
+class LookupTable:
+ """
+ LookupTable class, to provide a way to do direct table lookups.
+ """
+
+ table: np.ndarray
+ output_dtype: BaseDataType
+
+ def __init__(self, table: Any):
+ is_valid = True
+ try:
+ self.table = table if isinstance(table, np.ndarray) else np.array(table)
+ except Exception: # pragma: no cover # pylint: disable=broad-except
+ # here we try our best to convert the table to np.ndarray;
+ # if the conversion fails, we raise an exception at the end of the function
+ is_valid = False
+
+ if is_valid:
+ is_valid = self.table.size > 0
+
+ if is_valid:
+ minimum: int = 0
+ maximum: int = 0
+
+ if np.issubdtype(self.table.dtype, np.integer):
+ minimum = int(self.table.min())
+ maximum = int(self.table.max())
+ if self.table.ndim != 1:
+ is_valid = False
+ else:
+ is_valid = all(isinstance(item, LookupTable) for item in self.table.flat)
+ if is_valid:
+ minimum = int(self.table.flat[0].table.min())
+ maximum = int(self.table.flat[0].table.max())
+ for item in self.table.flat:
+ minimum = min(minimum, item.table.min())
+ maximum = max(maximum, item.table.max())
+
+ self.output_dtype = Integer.that_can_represent([minimum, maximum])
+
+ if not is_valid:
+ message = f"LookupTable cannot be constructed with {repr(table)}"
+ raise ValueError(message)
+
+ def __repr__(self):
+ return str(list(self.table))
+
+ def __getitem__(self, key: Union[int, np.integer, np.ndarray, Tracer]):
+ if not isinstance(key, Tracer):
+ return LookupTable.apply(key, self.table)
+
+ if not isinstance(key.output.dtype, Integer):
+ message = f"LookupTable cannot be looked up with {key.output}"
+ raise ValueError(message)
+
+ table = self.table
+ if not np.issubdtype(self.table.dtype, np.integer):
+ try:
+ table = np.broadcast_to(table, key.output.shape)
+ except Exception as error:
+ message = (
+ f"LookupTable of shape {self.table.shape} "
+ f"cannot be looked up with {key.output}"
+ )
+ raise ValueError(message) from error
+
+ output = deepcopy(key.output)
+ output.dtype = self.output_dtype
+
+ computation = Node.generic(
+ "tlu",
+ [key.output],
+ output,
+ LookupTable.apply,
+ kwargs={"table": table},
+ )
+ return Tracer(computation, [key])
+
+ @staticmethod
+ def apply(
+ key: Union[int, np.integer, np.ndarray],
+ table: np.ndarray,
+ ) -> Union[int, np.integer, np.ndarray]:
+ """
+ Apply lookup table.
+
+ Args:
+ key (Union[int, np.integer, np.ndarray]):
+ lookup key
+
+ table (np.ndarray):
+ lookup table
+
+ Returns:
+ Union[int, np.integer, np.ndarray]:
+ lookup result
+
+ Raises:
+ ValueError:
+ if `table` cannot be looked up with `key`
+ """
+
+ if not isinstance(key, (int, np.integer, np.ndarray)) or (
+ isinstance(key, np.ndarray) and not np.issubdtype(key.dtype, np.integer)
+ ):
+ message = f"LookupTable cannot be looked up with {key}"
+ raise ValueError(message)
+
+ if np.issubdtype(table.dtype, np.integer):
+ return table[key]
+
+ if not isinstance(key, np.ndarray) or key.shape != table.shape:
+ message = f"LookupTable of shape {table.shape} cannot be looked up with {key}"
+ raise ValueError(message)
+
+ flat_result = np.fromiter(
+ (lt.table[k] for lt, k in zip(table.flat, key.flat)),
+ dtype=np.longlong,
+ )
+ return flat_result.reshape(table.shape)
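For integer tables, the clear-evaluation path of `LookupTable.apply` reduces to NumPy fancy indexing, which performs the lookup element-wise for tensor keys; a minimal sketch:

```python
import numpy as np

# a 2-bit table: maps 0 -> 2, 1 -> 1, 2 -> 3, 3 -> 0
table = np.array([2, 1, 3, 0])

# scalar lookup
assert table[3] == 0

# tensor lookup happens element-wise via fancy indexing
key = np.array([[0, 1], [2, 3]])
assert (table[key] == np.array([[2, 1], [3, 0]])).all()
```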
diff --git a/frontends/concrete-python/concrete/numpy/extensions/tag.py b/frontends/concrete-python/concrete/numpy/extensions/tag.py
new file mode 100644
index 000000000..904493652
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/tag.py
@@ -0,0 +1,24 @@
+"""
+Declaration of `tag` context manager, to allow tagging certain nodes.
+"""
+
+import threading
+from contextlib import contextmanager
+
+tag_context = threading.local()
+tag_context.stack = []
+
+
+@contextmanager
+def tag(name: str):
+ """
+ Introduce a new tag to the tag stack.
+
+ Can be nested, and the resulting tag will be `tag1.tag2`.
+ """
+
+ tag_context.stack.append(name)
+ try:
+ yield
+ finally:
+ tag_context.stack.pop()
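The nested-tag behavior described in the docstring can be exercised with a standalone sketch that mirrors the module (the `tag_context` and `tag` names are reproduced here for illustration):

```python
import threading
from contextlib import contextmanager

tag_context = threading.local()
tag_context.stack = []

@contextmanager
def tag(name: str):
    # push the tag on entry, pop it on exit (even if an exception occurs)
    tag_context.stack.append(name)
    try:
        yield
    finally:
        tag_context.stack.pop()

with tag("tag1"):
    with tag("tag2"):
        # a node created here would carry the joined tag
        assert ".".join(tag_context.stack) == "tag1.tag2"
assert tag_context.stack == []
```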
diff --git a/frontends/concrete-python/concrete/numpy/extensions/univariate.py b/frontends/concrete-python/concrete/numpy/extensions/univariate.py
new file mode 100644
index 000000000..683689349
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/univariate.py
@@ -0,0 +1,89 @@
+"""
+Declaration of `univariate` function.
+"""
+
+from typing import Any, Callable, Optional, Type, Union
+
+import numpy as np
+
+from ..dtypes import BaseDataType, Float
+from ..representation import Node
+from ..tracing import ScalarAnnotation, Tracer
+from ..values import Value
+
+
+def univariate(
+ function: Callable[[Any], Any],
+ outputs: Optional[Union[BaseDataType, Type[ScalarAnnotation]]] = None,
+) -> Callable[[Union[Tracer, Any]], Union[Tracer, Any]]:
+ """
+ Wrap a univariate function so that it is traced into a single generic node.
+
+ Args:
+ function (Callable[[Any], Any]):
+ univariate function to wrap
+
+ outputs (Optional[Union[BaseDataType, Type[ScalarAnnotation]]], default = None):
+ data type of the result, unused during compilation, required for direct definition
+
+ Returns:
+ Callable[[Union[Tracer, Any]], Union[Tracer, Any]]:
+ another univariate function that can be called with a Tracer as well
+ """
+
+ def wrapper(x: Union[Tracer, Any]) -> Union[Tracer, Any]:
+ """
+ Evaluate or trace wrapped univariate function.
+
+ Args:
+ x (Union[Tracer, Any]):
+ input of the function
+
+ Returns:
+ Union[Tracer, Any]:
+ result of tracing or evaluation
+ """
+
+ if isinstance(x, Tracer):
+ dtype = (
+ {64: np.float64, 32: np.float32, 16: np.float16}[x.output.dtype.bit_width]
+ if isinstance(x.output.dtype, Float)
+ else np.int64
+ )
+
+ if x.output.shape == ():
+ sample = dtype(1) # type: ignore
+ else:
+ sample = np.ones(x.output.shape, dtype=dtype)
+ evaluation = function(sample)
+
+ output_value = Value.of(evaluation, is_encrypted=x.output.is_encrypted)
+ if output_value.shape != x.output.shape:
+ message = f"Function {function.__name__} cannot be used with cnp.univariate"
+ raise ValueError(message)
+
+ # pylint: disable=protected-access
+ is_direct = Tracer._is_direct
+ # pylint: enable=protected-access
+
+ if is_direct:
+ if outputs is None:
+ message = (
+ "Univariate extension requires "
+ "`outputs` argument for direct circuit definition "
+ "(e.g., cnp.univariate(function, outputs=cnp.uint4)(x))"
+ )
+ raise ValueError(message)
+ output_value.dtype = outputs if isinstance(outputs, BaseDataType) else outputs.dtype
+
+ computation = Node.generic(
+ function.__name__,
+ [x.output],
+ output_value,
+ lambda x: function(x), # pylint: disable=unnecessary-lambda
+ )
+ return Tracer(computation, [x])
+
+ return function(x)
+
+ return wrapper
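During tracing, the wrapper above probes the wrapped function with a sample of ones to learn the output shape (and rejects functions whose output shape differs from the input's). That probing step can be sketched on its own, with a hypothetical helper name:

```python
import numpy as np

def infer_output_shape(function, input_shape):
    # probe the function with a sample of ones, as the tracer does,
    # to determine the shape of the resulting output value
    sample = np.int64(1) if input_shape == () else np.ones(input_shape, dtype=np.int64)
    return np.shape(function(sample))

# shape-preserving univariate functions are accepted
assert infer_output_shape(lambda x: x ** 2, (3,)) == (3,)
assert infer_output_shape(lambda x: x ** 2, ()) == ()
```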
diff --git a/frontends/concrete-python/concrete/numpy/extensions/zeros.py b/frontends/concrete-python/concrete/numpy/extensions/zeros.py
new file mode 100644
index 000000000..181a38f3d
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/extensions/zeros.py
@@ -0,0 +1,56 @@
+"""
+Declaration of `zeros` and `zero` functions, to simplify creation of encrypted zeros.
+"""
+
+from typing import Tuple, Union
+
+import numpy as np
+
+from ..representation import Node
+from ..tracing import Tracer
+from ..values import Value
+
+
+def zeros(shape: Union[int, Tuple[int, ...]]) -> Union[np.ndarray, Tracer]:
+ """
+ Create an encrypted array of zeros.
+
+ Args:
+ shape (Union[int, Tuple[int, ...]]):
+ shape of the array
+
+ Returns:
+ Union[np.ndarray, Tracer]:
+ Tracer that represents the operation during tracing
+ ndarray filled with zeros otherwise
+ """
+
+ # pylint: disable=protected-access
+ is_tracing = Tracer._is_tracing
+ # pylint: enable=protected-access
+
+ numpy_zeros = np.zeros(shape, dtype=np.int64)
+
+ if is_tracing:
+ computation = Node.generic(
+ "zeros",
+ [],
+ Value.of(numpy_zeros, is_encrypted=True),
+ lambda: np.zeros(shape, dtype=np.int64),
+ )
+ return Tracer(computation, [])
+
+ return numpy_zeros
+
+
+def zero() -> Union[np.ndarray, Tracer]:
+ """
+ Create an encrypted scalar with the value of zero.
+
+ Returns:
+ Union[np.ndarray, Tracer]:
+ Tracer that represents the operation during tracing
+ ndarray with zero otherwise
+ """
+
+ return zeros(())
diff --git a/frontends/concrete-python/concrete/numpy/internal/__init__.py b/frontends/concrete-python/concrete/numpy/internal/__init__.py
new file mode 100644
index 000000000..c09f84c42
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/internal/__init__.py
@@ -0,0 +1,3 @@
+"""
+Export functions that are used internally by other modules for common things (e.g., assertions).
+"""
diff --git a/frontends/concrete-python/concrete/numpy/internal/utils.py b/frontends/concrete-python/concrete/numpy/internal/utils.py
new file mode 100644
index 000000000..9537ebc4b
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/internal/utils.py
@@ -0,0 +1,32 @@
+"""
+Declaration of various functions and constants related to the entire project.
+"""
+
+
+def assert_that(condition: bool, message: str = ""):
+ """
+ Assert a condition.
+
+ Args:
+ condition (bool):
+ condition to assert
+
+ message (str):
+ message to give to `AssertionError` if the condition does not hold
+
+ Raises:
+ AssertionError:
+ if the condition does not hold
+ """
+
+ if not condition:
+ raise AssertionError(message)
+
+
+def unreachable():
+ """
+ Raise a RuntimeError to indicate unreachable code is entered.
+ """
+
+ message = "Entered unreachable code"
+ raise RuntimeError(message)
diff --git a/frontends/concrete-python/concrete/numpy/mlir/__init__.py b/frontends/concrete-python/concrete/numpy/mlir/__init__.py
new file mode 100644
index 000000000..648c3ec11
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/mlir/__init__.py
@@ -0,0 +1,6 @@
+"""
+Provide `computation graph` to `mlir` conversion functionality.
+"""
+
+from .graph_converter import GraphConverter
+from .node_converter import NodeConverter
diff --git a/frontends/concrete-python/concrete/numpy/mlir/graph_converter.py b/frontends/concrete-python/concrete/numpy/mlir/graph_converter.py
new file mode 100644
index 000000000..1fd4d37e7
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/mlir/graph_converter.py
@@ -0,0 +1,739 @@
+"""
+Declaration of `GraphConverter` class.
+"""
+
+# pylint: disable=no-member,no-name-in-module
+
+from copy import deepcopy
+from typing import Any, Dict, List, Optional, cast
+
+import concrete.lang as concretelang
+import networkx as nx
+import numpy as np
+from concrete.lang.dialects import fhe, fhelinalg
+from mlir.dialects import arith, func
+from mlir.ir import (
+ Attribute,
+ Context,
+ InsertionPoint,
+ IntegerAttr,
+ IntegerType,
+ Location,
+ Module,
+ OpResult,
+ RankedTensorType,
+)
+
+from ..dtypes import Integer, SignedInteger
+from ..internal.utils import assert_that
+from ..representation import Graph, Node, Operation
+from ..values import ClearScalar, EncryptedScalar
+from .node_converter import NodeConverter
+from .utils import MAXIMUM_TLU_BIT_WIDTH
+
+# pylint: enable=no-member,no-name-in-module
+
+
+class GraphConverter:
+ """
+ GraphConverter class, to convert computation graphs to their MLIR equivalent.
+ """
+
+ @staticmethod
+ def _check_node_convertibility(graph: Graph, node: Node) -> Optional[str]:
+ """
+ Check node convertibility to MLIR.
+
+ Args:
+ graph (Graph):
+ computation graph of the node
+
+ node (Node):
+ node to be checked
+
+ Returns:
+ Optional[str]:
+ None if node is convertible to MLIR, the reason for inconvertibility otherwise
+ """
+
+ # pylint: disable=too-many-branches,too-many-return-statements,too-many-statements
+
+ inputs = node.inputs
+ output = node.output
+
+ if node.operation == Operation.Constant:
+ assert_that(len(inputs) == 0)
+ if not isinstance(output.dtype, Integer):
+ return "only integer constants are supported"
+
+ elif node.operation == Operation.Input:
+ assert_that(len(inputs) == 1)
+ assert_that(inputs[0] == output)
+ if not isinstance(output.dtype, Integer):
+ return "only integer inputs are supported"
+ if output.dtype.is_signed and output.is_clear:
+ return "only encrypted signed integer inputs are supported"
+
+ else:
+ assert_that(node.operation == Operation.Generic)
+
+ if not isinstance(output.dtype, Integer):
+ return "only integer operations are supported"
+
+ name = node.properties["name"]
+
+ if name == "add":
+ assert_that(len(inputs) == 2)
+
+ elif name == "array":
+ assert_that(len(inputs) > 0)
+ assert_that(all(input.is_scalar for input in inputs))
+
+ elif name == "assign.static":
+ if not inputs[0].is_encrypted:
+ return "only assignment to encrypted tensors is supported"
+
+ elif name in ["bitwise_and", "bitwise_or", "bitwise_xor", "left_shift", "right_shift"]:
+ assert_that(len(inputs) == 2)
+ if all(value.is_encrypted for value in node.inputs):
+ pred_nodes = graph.ordered_preds_of(node)
+ if (
+ name in ["left_shift", "right_shift"]
+ and cast(Integer, pred_nodes[1].output.dtype).bit_width > 4
+ ):
+ return "only up to 4-bit shifts are supported"
+
+ for pred_node in pred_nodes:
+ assert isinstance(pred_node.output.dtype, Integer)
+ if pred_node.output.dtype.is_signed:
+ return "only unsigned bitwise operations are supported"
+
+ elif name == "broadcast_to":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted broadcasting is supported"
+
+ elif name == "concatenate":
+ if not all(input.is_encrypted for input in inputs):
+ return "only concatenation of encrypted tensors is supported"
+
+ elif name in ["conv1d", "conv2d", "conv3d"]:
+ assert_that(len(inputs) == 2 or len(inputs) == 3)
+ if not (inputs[0].is_encrypted and inputs[1].is_clear):
+ return f"only {name} with encrypted input and clear weight is supported"
+
+ elif name == "dot":
+ assert_that(len(inputs) == 2)
+ if inputs[0].is_encrypted and inputs[1].is_encrypted:
+ return "only dot product between encrypted and clear is supported"
+
+ elif name in ["equal", "greater", "greater_equal", "less", "less_equal", "not_equal"]:
+ assert_that(len(inputs) == 2)
+
+ elif name == "expand_dims":
+ assert_that(len(inputs) == 1)
+
+ elif name == "index.static":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted indexing is supported"
+
+ elif name == "matmul":
+ assert_that(len(inputs) == 2)
+ if inputs[0].is_encrypted and inputs[1].is_encrypted:
+ return "only matrix multiplication between encrypted and clear is supported"
+
+ elif name == "maxpool":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted maxpool is supported"
+
+ elif name == "multiply":
+ assert_that(len(inputs) == 2)
+ if inputs[0].is_encrypted and inputs[1].is_encrypted:
+ return "only multiplication between encrypted and clear is supported"
+
+ elif name == "negative":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted negation is supported"
+
+ elif name == "ones":
+ assert_that(len(inputs) == 0)
+
+ elif name == "reshape":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted reshape is supported"
+
+ elif name == "squeeze":
+ assert_that(len(inputs) == 1)
+
+ elif name == "subtract":
+ assert_that(len(inputs) == 2)
+
+ elif name == "sum":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted sum is supported"
+
+ elif name == "transpose":
+ assert_that(len(inputs) == 1)
+ if not inputs[0].is_encrypted:
+ return "only encrypted transpose is supported"
+
+ elif name == "zeros":
+ assert_that(len(inputs) == 0)
+
+ else:
+ assert_that(node.converted_to_table_lookup)
+ variable_input_indices = [
+ idx
+ for idx, pred in enumerate(graph.ordered_preds_of(node))
+ if not pred.operation == Operation.Constant
+ ]
+ assert_that(len(variable_input_indices) == 1)
+
+ if len(inputs) > 0 and all(input.is_clear for input in inputs):
+ return "one of the operands must be encrypted"
+
+ return None
+
+ # pylint: enable=too-many-branches,too-many-return-statements,too-many-statements
+
+ @staticmethod
+ def _check_graph_convertibility(graph: Graph):
+ """
+ Check graph convertibility to MLIR.
+
+ Args:
+ graph (Graph):
+ computation graph to be checked
+
+ Raises:
+ RuntimeError:
+ if `graph` is not convertible to MLIR
+ """
+
+ offending_nodes = {}
+
+ if len(graph.output_nodes) > 1:
+ offending_nodes.update(
+ {
+ node: ["only a single output is supported", node.location]
+ for node in graph.output_nodes.values()
+ }
+ )
+
+ if len(offending_nodes) == 0:
+ for node in graph.graph.nodes:
+ reason = GraphConverter._check_node_convertibility(graph, node)
+ if reason is not None:
+ offending_nodes[node] = [reason, node.location]
+
+ if len(offending_nodes) != 0:
+ message = (
+ "Function you are trying to compile cannot be converted to MLIR\n\n"
+ + graph.format(highlighted_nodes=offending_nodes)
+ )
+ raise RuntimeError(message)
+
+ @staticmethod
+ def _update_bit_widths(graph: Graph):
+ """
+ Update bit-widths in a computation graph to be convertible to MLIR.
+
+ Args:
+ graph (Graph):
+ computation graph to be updated
+ """
+
+ offending_nodes: Dict[Node, List[str]] = {}
+
+ max_bit_width = 0
+ max_bit_width_node = None
+
+ first_tlu_node = None
+ first_signed_node = None
+
+ for node in nx.lexicographical_topological_sort(graph.graph):
+ dtype = node.output.dtype
+ assert_that(isinstance(dtype, Integer))
+
+ current_node_bit_width = (
+ dtype.bit_width - 1 if node.output.is_clear else dtype.bit_width
+ )
+ if (
+ all(value.is_encrypted for value in node.inputs)
+ and node.operation == Operation.Generic
+ and node.properties["name"]
+ in [
+ "greater",
+ "greater_equal",
+ "less",
+ "less_equal",
+ ]
+ ):
+ # implementation of these operators require at least 4 bits
+ current_node_bit_width = max(current_node_bit_width, 4)
+
+ if max_bit_width < current_node_bit_width:
+ max_bit_width = current_node_bit_width
+ max_bit_width_node = node
+
+ if node.converted_to_table_lookup and first_tlu_node is None:
+ first_tlu_node = node
+
+ if dtype.is_signed and first_signed_node is None:
+ first_signed_node = node
+
+ if first_tlu_node is not None:
+ if max_bit_width > MAXIMUM_TLU_BIT_WIDTH:
+ assert max_bit_width_node is not None
+ offending_nodes[max_bit_width_node] = [
+ (
+ {
+ Operation.Input: f"this input is {max_bit_width}-bits",
+ Operation.Constant: f"this constant is {max_bit_width}-bits",
+ Operation.Generic: f"this operation results in {max_bit_width}-bits",
+ }[max_bit_width_node.operation]
+ ),
+ max_bit_width_node.location,
+ ]
+ offending_nodes[first_tlu_node] = [
+ f"table lookups are only supported on circuits with "
+ f"up to {MAXIMUM_TLU_BIT_WIDTH}-bits",
+ first_tlu_node.location,
+ ]
+
+ if len(offending_nodes) != 0:
+ raise RuntimeError(
+ "Function you are trying to compile cannot be converted to MLIR:\n\n"
+ + graph.format(highlighted_nodes=offending_nodes)
+ )
+
+ for node in nx.topological_sort(graph.graph):
+ assert isinstance(node.output.dtype, Integer)
+ node.properties["original_bit_width"] = node.output.dtype.bit_width
+
+ for value in node.inputs + [node.output]:
+ dtype = value.dtype
+ assert_that(isinstance(dtype, Integer))
+ dtype.bit_width = max_bit_width + 1 if value.is_clear else max_bit_width
+
+ @staticmethod
+ def _offset_negative_lookup_table_inputs(graph: Graph):
+ """
+ Offset negative table lookup inputs to be convertible to MLIR.
+
+ Args:
+ graph (Graph):
+ computation graph to apply offset
+ """
+
+ # ugly hack to add an offset before entering a TLU
+ # if its variable input node has a signed output.
+ # this makes hardcoded assumptions about the way bit widths are handled in MLIR.
+ # this does not update the TLU input values to allow for proper table generation.
+
+ nx_graph = graph.graph
+ for node in list(nx_graph.nodes):
+ if node.operation == Operation.Generic:
+ if not node.converted_to_table_lookup:
+ continue
+
+ variable_input_index = -1
+
+ preds = graph.ordered_preds_of(node)
+ for index, pred in enumerate(preds):
+ if pred.operation != Operation.Constant:
+ variable_input_index = index
+ break
+
+ variable_input_node = preds[variable_input_index]
+
+ variable_input_value = variable_input_node.output
+ variable_input_dtype = variable_input_value.dtype
+
+ assert_that(isinstance(variable_input_dtype, Integer))
+ variable_input_dtype = cast(Integer, variable_input_dtype)
+
+ if not variable_input_dtype.is_signed:
+ continue
+
+ variable_input_bit_width = variable_input_dtype.bit_width
+ offset_constant_dtype = SignedInteger(variable_input_bit_width + 1)
+
+ offset_constant_value = abs(variable_input_dtype.min())
+
+ offset_constant = Node.constant(offset_constant_value)
+ offset_constant.output.dtype = offset_constant_dtype
+
+ original_bit_width = Integer.that_can_represent(offset_constant_value).bit_width
+ offset_constant.properties["original_bit_width"] = original_bit_width
+
+ add_offset = Node.generic(
+ "add",
+ [variable_input_value, ClearScalar(offset_constant_dtype)],
+ variable_input_value,
+ np.add,
+ )
+
+ original_bit_width = variable_input_node.properties["original_bit_width"]
+ add_offset.properties["original_bit_width"] = original_bit_width
+
+ nx_graph.remove_edge(variable_input_node, node)
+
+ nx_graph.add_edge(variable_input_node, add_offset, input_idx=0)
+ nx_graph.add_edge(offset_constant, add_offset, input_idx=1)
+
+ nx_graph.add_edge(add_offset, node, input_idx=variable_input_index)
+
+ @staticmethod
+ def _broadcast_assignments(graph: Graph):
+ """
+ Broadcast assignments.
+
+ Args:
+ graph (Graph):
+ computation graph to transform
+ """
+
+ nx_graph = graph.graph
+ for node in list(nx_graph.nodes):
+ if node.operation == Operation.Generic and node.properties["name"] == "assign.static":
+ shape = node.inputs[0].shape
+ index = node.properties["kwargs"]["index"]
+
+ assert_that(isinstance(index, tuple))
+ while len(index) < len(shape):
+ index = (*index, slice(None, None, None))
+
+ required_value_shape_list = []
+
+ for i, indexing_element in enumerate(index):
+ if isinstance(indexing_element, slice):
+ n = len(np.zeros(shape[i])[indexing_element])
+ required_value_shape_list.append(n)
+ else:
+ required_value_shape_list.append(1)
+
+ required_value_shape = tuple(required_value_shape_list)
+ actual_value_shape = node.inputs[1].shape
+
+ if required_value_shape != actual_value_shape:
+ preds = graph.ordered_preds_of(node)
+ pred_to_modify = preds[1]
+
+ modified_value = deepcopy(pred_to_modify.output)
+ modified_value.shape = required_value_shape
+
+ try:
+ np.broadcast_to(np.zeros(actual_value_shape), required_value_shape)
+ modified_value.is_encrypted = True
+ modified_value.dtype = node.output.dtype
+ modified_pred = Node.generic(
+ "broadcast_to",
+ [pred_to_modify.output],
+ modified_value,
+ np.broadcast_to,
+ kwargs={"shape": required_value_shape},
+ )
+ except Exception: # pylint: disable=broad-except
+ np.reshape(np.zeros(actual_value_shape), required_value_shape)
+ modified_pred = Node.generic(
+ "reshape",
+ [pred_to_modify.output],
+ modified_value,
+ np.reshape,
+ kwargs={"newshape": required_value_shape},
+ )
+
+ modified_pred.properties["original_bit_width"] = pred_to_modify.properties[
+ "original_bit_width"
+ ]
+
+ nx_graph.add_edge(pred_to_modify, modified_pred, input_idx=0)
+
+ nx_graph.remove_edge(pred_to_modify, node)
+ nx_graph.add_edge(modified_pred, node, input_idx=1)
+
+ node.inputs[1] = modified_value
+
+ @staticmethod
+ def _encrypt_clear_assignments(graph: Graph):
+ """
+ Encrypt clear assignments.
+
+ Args:
+ graph (Graph):
+ computation graph to transform
+ """
+
+ nx_graph = graph.graph
+ for node in list(nx_graph.nodes):
+ if node.operation == Operation.Generic and node.properties["name"] == "assign.static":
+ assigned_value = node.inputs[1]
+ if assigned_value.is_clear:
+ preds = graph.ordered_preds_of(node)
+ assigned_pred = preds[1]
+
+ new_assigned_pred_value = deepcopy(assigned_value)
+ new_assigned_pred_value.is_encrypted = True
+ new_assigned_pred_value.dtype = preds[0].output.dtype
+
+ zero = Node.generic(
+ "zeros",
+ [],
+ EncryptedScalar(new_assigned_pred_value.dtype),
+ lambda: np.zeros((), dtype=np.int64),
+ )
+
+ original_bit_width = 1
+ zero.properties["original_bit_width"] = original_bit_width
+
+ new_assigned_pred = Node.generic(
+ "add",
+ [assigned_pred.output, zero.output],
+ new_assigned_pred_value,
+ np.add,
+ )
+
+ original_bit_width = assigned_pred.properties["original_bit_width"]
+ new_assigned_pred.properties["original_bit_width"] = original_bit_width
+
+ nx_graph.remove_edge(preds[1], node)
+
+ nx_graph.add_edge(preds[1], new_assigned_pred, input_idx=0)
+ nx_graph.add_edge(zero, new_assigned_pred, input_idx=1)
+
+ nx_graph.add_edge(new_assigned_pred, node, input_idx=1)
+
+ @staticmethod
+ def _tensorize_scalars_for_fhelinalg(graph: Graph):
+ """
+ Tensorize scalars if they are used within fhelinalg operations.
+
+ Args:
+ graph (Graph):
+ computation graph to update
+ """
+
+ # pylint: disable=invalid-name
+ OPS_TO_TENSORIZE = [
+ "add",
+ "bitwise_and",
+ "bitwise_or",
+ "bitwise_xor",
+ "broadcast_to",
+ "dot",
+ "equal",
+ "greater",
+ "greater_equal",
+ "left_shift",
+ "less",
+ "less_equal",
+ "multiply",
+ "not_equal",
+ "right_shift",
+ "subtract",
+ ]
+ # pylint: enable=invalid-name
+
+ tensorized_scalars: Dict[Node, Node] = {}
+
+ nx_graph = graph.graph
+ for node in list(nx_graph.nodes):
+ if node.operation == Operation.Generic and node.properties["name"] in OPS_TO_TENSORIZE:
+ assert len(node.inputs) in {1, 2}
+
+ if len(node.inputs) == 2:
+ if {inp.is_scalar for inp in node.inputs} != {True, False}:
+ continue
+ else:
+ if not node.inputs[0].is_scalar:
+ continue
+
+ # bitwise and comparison operators that have constant operands
+ # are converted to table lookups and don't need tensorization here
+ if node.converted_to_table_lookup:
+ continue
+
+ pred_to_tensorize: Optional[Node] = None
+ pred_to_tensorize_index = 0
+
+ preds = graph.ordered_preds_of(node)
+ for index, pred in enumerate(preds):
+ if pred.output.is_scalar:
+ pred_to_tensorize = pred
+ pred_to_tensorize_index = index
+ break
+
+ assert pred_to_tensorize is not None
+
+ tensorized_pred = tensorized_scalars.get(pred_to_tensorize)
+ if tensorized_pred is None:
+ tensorized_value = deepcopy(pred_to_tensorize.output)
+ tensorized_value.shape = (1,)
+
+ tensorized_pred = Node.generic(
+ "array",
+ [pred_to_tensorize.output],
+ tensorized_value,
+ lambda *args: np.array(args),
+ )
+
+ original_bit_width = pred_to_tensorize.properties["original_bit_width"]
+ tensorized_pred.properties["original_bit_width"] = original_bit_width
+
+ original_shape = ()
+ tensorized_pred.properties["original_shape"] = original_shape
+
+ nx_graph.add_edge(pred_to_tensorize, tensorized_pred, input_idx=0)
+ tensorized_scalars[pred_to_tensorize] = tensorized_pred
+
+ assert tensorized_pred is not None
+
+ nx_graph.remove_edge(pred_to_tensorize, node)
+ nx_graph.add_edge(tensorized_pred, node, input_idx=pred_to_tensorize_index)
+
+ new_input_value = deepcopy(node.inputs[pred_to_tensorize_index])
+ new_input_value.shape = (1,)
+ node.inputs[pred_to_tensorize_index] = new_input_value
+
+ @staticmethod
+ def _sanitize_signed_inputs(graph: Graph, args: List[Any], ctx: Context) -> List[Any]:
+ """
+ Use subtraction to sanitize signed inputs.
+
+ Args:
+ graph (Graph):
+ computation graph being converted
+
+ args (List[Any]):
+ list of arguments from mlir main
+
+ ctx (Context):
+ mlir context where the conversion is being performed
+
+ Returns:
+ List[Any]:
+ list of sanitized arguments
+ """
+
+ sanitized_args = []
+ for i, arg in enumerate(args):
+ input_node = graph.input_nodes[i]
+ input_value = input_node.output
+
+ assert_that(isinstance(input_value.dtype, Integer))
+ input_dtype = cast(Integer, input_value.dtype)
+
+ if input_dtype.is_signed:
+ assert_that(input_value.is_encrypted)
+ n = input_dtype.bit_width
+
+ sanitizer_type = IntegerType.get_signless(n + 1)
+ sanitizer = 2 ** (n - 1)
+
+ if input_value.is_scalar:
+ sanitizer_attr = IntegerAttr.get(sanitizer_type, sanitizer)
+ else:
+ sanitizer_type = RankedTensorType.get((1,), sanitizer_type)
+ sanitizer_attr = Attribute.parse(f"dense<[{sanitizer}]> : {sanitizer_type}")
+
+ # pylint: disable=too-many-function-args
+ sanitizer_cst = arith.ConstantOp(sanitizer_type, sanitizer_attr)
+ # pylint: enable=too-many-function-args
+
+ resulting_type = NodeConverter.value_to_mlir_type(ctx, input_value)
+ if input_value.is_scalar:
+ sanitized = fhe.SubEintIntOp(resulting_type, arg, sanitizer_cst).result
+ else:
+ sanitized = fhelinalg.SubEintIntOp(resulting_type, arg, sanitizer_cst).result
+
+ sanitized_args.append(sanitized)
+ else:
+ sanitized_args.append(arg)
+
+ return sanitized_args
+
+ @staticmethod
+ def convert(graph: Graph) -> str:
+ """
+ Convert a computation graph to its corresponding MLIR representation.
+
+ Args:
+ graph (Graph):
+ computation graph to be converted
+
+ Returns:
+ str:
+ textual MLIR representation corresponding to `graph`
+ """
+
+ graph = deepcopy(graph)
+
+ GraphConverter._check_graph_convertibility(graph)
+ GraphConverter._update_bit_widths(graph)
+ GraphConverter._offset_negative_lookup_table_inputs(graph)
+ GraphConverter._broadcast_assignments(graph)
+ GraphConverter._encrypt_clear_assignments(graph)
+ GraphConverter._tensorize_scalars_for_fhelinalg(graph)
+
+ from_elements_operations: Dict[OpResult, List[OpResult]] = {}
+
+ with Context() as ctx, Location.unknown():
+ concretelang.register_dialects(ctx)
+
+ module = Module.create()
+ with InsertionPoint(module.body):
+ parameters = [
+ NodeConverter.value_to_mlir_type(ctx, input_node.output)
+ for input_node in graph.ordered_inputs()
+ ]
+
+ @func.FuncOp.from_py_func(*parameters)
+ def main(*args):
+ sanitized_args = GraphConverter._sanitize_signed_inputs(graph, args, ctx)
+
+ ir_to_mlir = {}
+ for arg_num, node in graph.input_nodes.items():
+ ir_to_mlir[node] = sanitized_args[arg_num]
+
+ constant_cache = {}
+ for node in nx.topological_sort(graph.graph):
+ if node.operation == Operation.Input:
+ continue
+
+ preds = [ir_to_mlir[pred] for pred in graph.ordered_preds_of(node)]
+ node_converter = NodeConverter(
+ ctx,
+ graph,
+ node,
+ preds,
+ constant_cache,
+ from_elements_operations,
+ )
+ ir_to_mlir[node] = node_converter.convert()
+
+ results = (ir_to_mlir[output_node] for output_node in graph.ordered_outputs())
+ return results
+
+ direct_replacements = {}
+ for placeholder, elements in from_elements_operations.items():
+ element_names = [NodeConverter.mlir_name(element) for element in elements]
+ actual_value = f"tensor.from_elements {', '.join(element_names)} : {placeholder.type}"
+ direct_replacements[NodeConverter.mlir_name(placeholder)] = actual_value
+
+ module_lines_after_hacks_are_applied = []
+ for line in str(module).split("\n"):
+ mlir_name = line.split("=")[0].strip()
+ if mlir_name not in direct_replacements:
+ module_lines_after_hacks_are_applied.append(line)
+ continue
+
+ new_value = direct_replacements[mlir_name]
+ module_lines_after_hacks_are_applied.append(f" {mlir_name} = {new_value}")
+
+ return "\n".join(module_lines_after_hacks_are_applied).strip()
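The textual-replacement pass at the end of `convert` works around the fact that `tensor.from_elements` cannot be created through the Python bindings: placeholder results are emitted first, then the printed module is patched line by line. A minimal standalone sketch of that substitution (the module text and MLIR type names below are illustrative, not real compiler output):

```python
# Sketch of the direct-replacement pass: any printed line whose result name
# appears in `direct_replacements` has its right-hand side swapped for the
# actual `tensor.from_elements` text.
def apply_direct_replacements(module_text: str, direct_replacements: dict) -> str:
    lines = []
    for line in module_text.split("\n"):
        mlir_name = line.split("=")[0].strip()
        if mlir_name not in direct_replacements:
            lines.append(line)
            continue
        lines.append(f"    {mlir_name} = {direct_replacements[mlir_name]}")
    return "\n".join(lines).strip()

# illustrative module text, roughly shaped like printed MLIR
module = '''
    %0 = "FHE.zero_tensor"() : () -> tensor<3x!FHE.eint<4>>
    %1 = "FHE.add_eint"(%a, %b) : (...) -> (...)
'''
replacements = {"%0": "tensor.from_elements %e0, %e1, %e2 : tensor<3x!FHE.eint<4>>"}
print(apply_direct_replacements(module, replacements))
```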
diff --git a/frontends/concrete-python/concrete/numpy/mlir/node_converter.py b/frontends/concrete-python/concrete/numpy/mlir/node_converter.py
new file mode 100644
index 000000000..fb5a44be7
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/mlir/node_converter.py
@@ -0,0 +1,1866 @@
+"""
+Declaration of `NodeConverter` class.
+"""
+
+# pylint: disable=no-member,no-name-in-module,too-many-lines
+
+import re
+from enum import IntEnum
+from typing import Callable, Dict, List, Set, Tuple, cast
+
+import numpy as np
+from concrete.lang.dialects import fhe, fhelinalg
+from concrete.lang.dialects.fhe import EncryptedIntegerType
+from mlir.dialects import arith, tensor
+from mlir.ir import (
+ ArrayAttr,
+ Attribute,
+ BlockArgument,
+ BoolAttr,
+ Context,
+ DenseElementsAttr,
+ IndexType,
+ IntegerAttr,
+ IntegerType,
+ OpResult,
+ RankedTensorType,
+ Type,
+)
+
+from ..dtypes import Integer, UnsignedInteger
+from ..internal.utils import assert_that
+from ..representation import Graph, Node, Operation
+from ..values import EncryptedScalar, Value
+from .utils import construct_deduplicated_tables
+
+# pylint: enable=no-member,no-name-in-module
+
+
+class Comparison(IntEnum):
+ """
+ Comparison enum, to generalize conversion of comparison operators.
+
+ Because a comparison result has 3 possibilities, the Comparison enum is 2 bits.
+ """
+
+ EQUAL = 0b00
+ LESS = 0b01
+ GREATER = 0b10
+
+ UNUSED = 0b11
+
+
+class NodeConverter:
+ """
+ NodeConverter class, to convert computation graph nodes to their MLIR equivalent.
+ """
+
+ # pylint: disable=too-many-instance-attributes
+
+ ctx: Context
+ graph: Graph
+ node: Node
+ preds: List[OpResult]
+
+ all_of_the_inputs_are_encrypted: bool
+ all_of_the_inputs_are_tensors: bool
+ one_of_the_inputs_is_a_tensor: bool
+
+ constant_cache: Dict[Tuple[Type, Attribute], OpResult]
+ from_elements_operations: Dict[OpResult, List[OpResult]]
+
+ # pylint: enable=too-many-instance-attributes
+
+ @staticmethod
+ def value_to_mlir_type(ctx: Context, value: Value) -> Type:
+ """
+ Convert a `Value` to its corresponding MLIR `Type`.
+
+ Args:
+ ctx (Context):
+ MLIR context to perform the conversion
+
+ value (Value):
+ value to convert
+
+ Returns:
+ Type:
+ MLIR `Type` corresponding to `value`
+ """
+
+ dtype = value.dtype
+
+ if isinstance(dtype, Integer):
+ if value.is_encrypted:
+ result = EncryptedIntegerType.get(ctx, dtype.bit_width)
+ else:
+ result = IntegerType.get_signless(dtype.bit_width)
+
+ return result if value.is_scalar else RankedTensorType.get(value.shape, result)
+
+ # the branch above is always taken due to compatibility checks
+ # still, it's a good idea to raise an appropriate error, just in case
+
+ message = f"{value} cannot be converted to MLIR" # pragma: no cover
+ raise ValueError(message) # pragma: no cover
+
+ @staticmethod
+ def mlir_name(result: OpResult) -> str:
+ """
+ Extract the MLIR variable name of an `OpResult`.
+
+ Args:
+ result (OpResult):
+ op result to extract the name
+
+ Returns:
+ str:
+ MLIR variable name of `result`
+ """
+
+ if isinstance(result, BlockArgument):
+ return f"%arg{result.arg_number}"
+
+ return str(result).replace("Value(", "").split("=", maxsplit=1)[0].strip()
+
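The string manipulation in `mlir_name` relies on how the MLIR Python bindings print an `OpResult`, roughly `Value(%2 = "FHE.add_eint"(...) : ...)`. A minimal sketch of the same parsing on a plain string (the printed form here is an assumption for illustration):

```python
def mlir_name_from_str(printed: str) -> str:
    # Mirrors NodeConverter.mlir_name for a non-BlockArgument result:
    # drop the "Value(" wrapper, keep the text before the first "=".
    return printed.replace("Value(", "").split("=", maxsplit=1)[0].strip()

print(mlir_name_from_str('Value(%2 = "FHE.add_eint"(%0, %1) : ...)'))
```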
+ def __init__(
+ self,
+ ctx: Context,
+ graph: Graph,
+ node: Node,
+ preds: List[OpResult],
+ constant_cache: Dict[Tuple[Type, Attribute], OpResult],
+ from_elements_operations: Dict[OpResult, List[OpResult]],
+ ):
+ self.ctx = ctx
+ self.graph = graph
+ self.node = node
+ self.preds = preds
+
+ self.all_of_the_inputs_are_encrypted = True
+ self.all_of_the_inputs_are_tensors = True
+ self.one_of_the_inputs_is_a_tensor = False
+
+ for pred in graph.ordered_preds_of(node):
+ if not pred.output.is_encrypted:
+ self.all_of_the_inputs_are_encrypted = False
+
+ shape = pred.properties.get("original_shape", pred.output.shape)
+ if shape == ():
+ self.all_of_the_inputs_are_tensors = False
+ else:
+ self.one_of_the_inputs_is_a_tensor = True
+
+ self.constant_cache = constant_cache
+ self.from_elements_operations = from_elements_operations
+
+ def convert(self) -> OpResult:
+ """
+ Convert a node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ if self.node.operation == Operation.Constant:
+ return self._convert_constant()
+
+ assert_that(self.node.operation == Operation.Generic)
+
+ name = self.node.properties["name"]
+ converters = {
+ "add": self._convert_add,
+ "array": self._convert_array,
+ "assign.static": self._convert_static_assignment,
+ "bitwise_and": self._convert_bitwise_and,
+ "bitwise_or": self._convert_bitwise_or,
+ "bitwise_xor": self._convert_bitwise_xor,
+ "broadcast_to": self._convert_broadcast_to,
+ "concatenate": self._convert_concat,
+ "conv1d": self._convert_conv1d,
+ "conv2d": self._convert_conv2d,
+ "conv3d": self._convert_conv3d,
+ "dot": self._convert_dot,
+ "equal": self._convert_equal,
+ "expand_dims": self._convert_reshape,
+ "greater": self._convert_greater,
+ "greater_equal": self._convert_greater_equal,
+ "index.static": self._convert_static_indexing,
+ "left_shift": self._convert_left_shift,
+ "less": self._convert_less,
+ "less_equal": self._convert_less_equal,
+ "matmul": self._convert_matmul,
+ "maxpool": self._convert_maxpool,
+ "multiply": self._convert_mul,
+ "negative": self._convert_neg,
+ "not_equal": self._convert_not_equal,
+ "ones": self._convert_ones,
+ "reshape": self._convert_reshape,
+ "right_shift": self._convert_right_shift,
+ "squeeze": self._convert_squeeze,
+ "subtract": self._convert_sub,
+ "sum": self._convert_sum,
+ "transpose": self._convert_transpose,
+ "zeros": self._convert_zeros,
+ }
+
+ if name in converters:
+ return converters[name]()
+
+ assert_that(self.node.converted_to_table_lookup)
+ return self._convert_tlu()
+
+ # pylint: disable=no-self-use,too-many-branches,too-many-locals,too-many-statements
+
+ def _convert_add(self) -> OpResult:
+ """
+ Convert "add" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ preds = self.preds
+
+ if self.all_of_the_inputs_are_encrypted:
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.AddEintOp(resulting_type, *preds).result
+ else:
+ result = fhe.AddEintOp(resulting_type, *preds).result
+ else:
+ if self.node.inputs[0].is_clear:
+ preds = preds[::-1]
+
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.AddEintIntOp(resulting_type, *preds).result
+ else:
+ result = fhe.AddEintIntOp(resulting_type, *preds).result
+
+ return result
+
+ def _convert_broadcast_to(self) -> OpResult:
+ """
+ Convert "broadcast_to" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ zeros = fhe.ZeroTensorOp(resulting_type).result
+ if self.node.inputs[0].is_encrypted:
+ result = fhelinalg.AddEintOp(resulting_type, zeros, self.preds[0]).result
+ else:
+ result = fhelinalg.AddEintIntOp(resulting_type, zeros, self.preds[0]).result
+
+ # TODO: convert this to a single operation once it can be done
+ # (https://github.com/zama-ai/concrete-numpy-internal/issues/1610)
+
+ return result
+
+ def _convert_array(self) -> OpResult:
+ """
+ Convert "array" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ preds = self.preds
+
+ processed_preds = []
+ for pred, value in zip(preds, self.node.inputs):
+ if value.is_encrypted or self.node.output.is_clear:
+ processed_preds.append(pred)
+ continue
+
+ assert isinstance(value.dtype, Integer)
+
+ zero_value = EncryptedScalar(UnsignedInteger(value.dtype.bit_width - 1))
+ zero_type = NodeConverter.value_to_mlir_type(self.ctx, zero_value)
+ zero = fhe.ZeroEintOp(zero_type).result
+
+ encrypted_pred = fhe.AddEintIntOp(zero_type, zero, pred).result
+ processed_preds.append(encrypted_pred)
+
+ # `placeholder_result` will be replaced textually by `actual_value` below in graph converter
+ # `tensor.from_elements` cannot be created from python bindings
+ # that's why we use placeholder values and text manipulation
+
+ if self.node.output.is_clear:
+ attribute = Attribute.parse(f"dense<0> : {resulting_type}")
+ # pylint: disable=too-many-function-args
+ placeholder_result = arith.ConstantOp(resulting_type, attribute).result
+ # pylint: enable=too-many-function-args
+ else:
+ placeholder_result = fhe.ZeroTensorOp(resulting_type).result
+
+ self.from_elements_operations[placeholder_result] = processed_preds
+ return placeholder_result
+
+ def _convert_bitwise_and(self) -> OpResult:
+ """
+ Convert "bitwise_and" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_bitwise(lambda x, y: x & y)
+
+ def _convert_bitwise_or(self) -> OpResult:
+ """
+ Convert "bitwise_or" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_bitwise(lambda x, y: x | y)
+
+ def _convert_bitwise_xor(self) -> OpResult:
+ """
+ Convert "bitwise_xor" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_bitwise(lambda x, y: x ^ y)
+
+ def _convert_concat(self) -> OpResult:
+ """
+ Convert "concatenate" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ axis = self.node.properties["kwargs"].get("axis", 0)
+
+ if axis is not None:
+ if axis < 0:
+ axis += len(self.node.inputs[0].shape)
+ return fhelinalg.ConcatOp(
+ resulting_type,
+ self.preds,
+ axis=IntegerAttr.get(IntegerType.get_signless(64), axis),
+ ).result
+
+ flattened_preds = []
+ for pred, input_value in zip(self.preds, self.node.inputs):
+ input_shape = input_value.shape
+ input_size = np.prod(input_shape)
+
+ flattened_pred_type = RankedTensorType.get(
+ [input_size],
+ NodeConverter.value_to_mlir_type(
+ self.ctx,
+ Value(input_value.dtype, shape=(), is_encrypted=input_value.is_encrypted),
+ ),
+ )
+ flattened_pred = tensor.CollapseShapeOp(
+ flattened_pred_type,
+ pred,
+ ArrayAttr.get(
+ [
+ ArrayAttr.get(
+ [
+ IntegerAttr.get(IntegerType.get_signless(64), i)
+ for i in range(len(input_shape))
+ ]
+ )
+ ]
+ ),
+ ).result
+ flattened_preds.append(flattened_pred)
+
+ return fhelinalg.ConcatOp(
+ resulting_type,
+ flattened_preds,
+ axis=IntegerAttr.get(IntegerType.get_signless(64), 0),
+ ).result
+
+ def _convert_constant(self) -> OpResult:
+ """
+ Convert Operation.Constant node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ data = self.node()
+
+ if self.node.output.is_scalar:
+ attr = IntegerAttr.get(resulting_type, data)
+ else:
+ # usage of `Attribute.parse` is the result of some limitations in the MLIR module
+ # provided by LLVM
+
+ # what should have been used is `DenseElementsAttr` but it's impossible to assign
+ # custom bit-widths using it (e.g., uint5)
+
+ # since we couldn't create a `DenseElementsAttr` with a custom bit width using
+ # the python api we use `Attribute.parse` to let the underlying library do it by itself
+
+ attr = Attribute.parse(f"dense<{str(data.tolist())}> : {resulting_type}")
+
+ return self._create_constant(resulting_type, attr).result
+
+ def _convert_conv1d(self) -> OpResult:
+ """
+ Convert "conv1d" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ message = "conv1d conversion to MLIR is not yet implemented"
+ raise NotImplementedError(message)
+
+ def _convert_conv2d(self) -> OpResult:
+ """
+ Convert "conv2d" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ integer_type = IntegerType.get_signless(64, context=self.ctx)
+
+ strides = DenseElementsAttr.get(
+ np.array(list(self.node.properties["kwargs"]["strides"]), dtype=np.uint64),
+ type=integer_type,
+ context=self.ctx,
+ )
+ dilations = DenseElementsAttr.get(
+ np.array(list(self.node.properties["kwargs"]["dilations"]), dtype=np.uint64),
+ type=integer_type,
+ context=self.ctx,
+ )
+ pads = DenseElementsAttr.get(
+ np.array(list(self.node.properties["kwargs"]["pads"]), dtype=np.uint64),
+ type=integer_type,
+ context=self.ctx,
+ )
+ group = IntegerAttr.get(
+ IntegerType.get_signless(64), self.node.properties["kwargs"]["group"]
+ )
+
+ has_bias = len(self.node.inputs) == 3
+ if has_bias:
+ bias = self.preds[2]
+ else:
+ bias = None
+ # input and weight
+ preds = self.preds[:2]
+ return fhelinalg.Conv2dOp(
+ resulting_type,
+ *preds,
+ bias=bias,
+ padding=pads,
+ strides=strides,
+ dilations=dilations,
+ group=group,
+ ).result
+
+ def _convert_conv3d(self) -> OpResult:
+ """
+ Convert "conv3d" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ message = "conv3d conversion to MLIR is not yet implemented"
+ raise NotImplementedError(message)
+
+ def _convert_dot(self) -> OpResult:
+ """
+ Convert "dot" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ preds = self.preds
+
+ if self.node.inputs[0].is_clear:
+ preds = preds[::-1]
+
+ if self.all_of_the_inputs_are_tensors:
+ # numpy.dot(x, y) where x and y are both vectors = regular dot product
+ result = fhelinalg.Dot(resulting_type, *preds).result
+
+ elif not self.one_of_the_inputs_is_a_tensor:
+ # numpy.dot(x, y) where x and y are both scalars = x * y
+ result = fhe.MulEintIntOp(resulting_type, *preds).result
+
+ else:
+ # numpy.dot(x, y) where one of x or y is a scalar and the other one is a vector = x * y
+ result = fhelinalg.MulEintIntOp(resulting_type, *preds).result
+
+ return result
+
+ def _convert_equal(self) -> OpResult:
+ """
+ Convert "equal" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_equality(equals=True)
+
+ def _convert_greater(self) -> OpResult:
+ """
+ Convert "greater" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_compare(invert_operands=True, accept={Comparison.LESS})
+
+ def _convert_greater_equal(self) -> OpResult:
+ """
+ Convert "greater_equal" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_compare(
+ invert_operands=True, accept={Comparison.LESS, Comparison.EQUAL}
+ )
+
+ def _convert_left_shift(self) -> OpResult:
+ """
+ Convert "left_shift" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_shift(orientation="left")
+
+ def _convert_less(self) -> OpResult:
+ """
+ Convert "less" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_compare(invert_operands=False, accept={Comparison.LESS})
+
+ def _convert_less_equal(self) -> OpResult:
+ """
+ Convert "less_equal" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_compare(
+ invert_operands=False, accept={Comparison.LESS, Comparison.EQUAL}
+ )
+
+ def _convert_matmul(self) -> OpResult:
+ """
+ Convert "matmul" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ preds = self.preds
+
+ if self.node.output.shape == ():
+ if self.node.inputs[0].is_clear:
+ preds = preds[::-1]
+ result = fhelinalg.Dot(resulting_type, *preds).result
+
+ elif self.node.inputs[0].is_clear:
+ result = fhelinalg.MatMulIntEintOp(resulting_type, *preds).result
+ else:
+ result = fhelinalg.MatMulEintIntOp(resulting_type, *preds).result
+
+ return result
+
+ def _convert_maxpool(self) -> OpResult:
+ """
+ Convert "maxpool" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ message = "MaxPool operation cannot be compiled yet"
+ raise NotImplementedError(message)
+
+ def _convert_mul(self) -> OpResult:
+ """
+ Convert "multiply" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ preds = self.preds
+
+ if self.node.inputs[0].is_clear:
+ preds = preds[::-1]
+
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.MulEintIntOp(resulting_type, *preds).result
+ else:
+ result = fhe.MulEintIntOp(resulting_type, *preds).result
+
+ return result
+
+ def _convert_neg(self) -> OpResult:
+ """
+ Convert "negative" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ pred = self.preds[0]
+
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.NegEintOp(resulting_type, pred).result
+ else:
+ result = fhe.NegEintOp(resulting_type, pred).result
+
+ return result
+
+ def _convert_not_equal(self) -> OpResult:
+ """
+ Convert "not_equal" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_equality(equals=False)
+
+ def _convert_ones(self) -> OpResult:
+ """
+ Convert "ones" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ assert isinstance(self.node.output.dtype, Integer)
+ bit_width = self.node.output.dtype.bit_width
+
+ if self.node.output.is_scalar:
+ constant_value = Value(
+ Integer(is_signed=False, bit_width=bit_width + 1),
+ shape=(),
+ is_encrypted=False,
+ )
+ constant_type = NodeConverter.value_to_mlir_type(self.ctx, constant_value)
+ constant_attr = IntegerAttr.get(constant_type, 1)
+
+ zero = fhe.ZeroEintOp(resulting_type).result
+ one = self._create_constant(constant_type, constant_attr).result
+
+ result = fhe.AddEintIntOp(resulting_type, zero, one).result
+ else:
+ constant_value = Value(
+ Integer(is_signed=False, bit_width=bit_width + 1),
+ shape=(1,),
+ is_encrypted=False,
+ )
+ constant_type = NodeConverter.value_to_mlir_type(self.ctx, constant_value)
+ constant_attr = Attribute.parse(f"dense<[1]> : {constant_type}")
+
+ zeros = fhe.ZeroTensorOp(resulting_type).result
+ ones = self._create_constant(constant_type, constant_attr).result
+
+ result = fhelinalg.AddEintIntOp(resulting_type, zeros, ones).result
+
+ return result
+
+ def _convert_reshape(self) -> OpResult:
+ """
+ Convert "reshape" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ input_shape = self.node.inputs[0].shape
+ output_shape = self.node.output.shape
+
+ pred = self.preds[0]
+ if input_shape == output_shape:
+ return pred
+
+ # we can either collapse or expand, which changes the number of dimensions
+ # this is a limitation of the current compiler, it will be improved in the future (#1060)
+ can_be_converted_directly = len(input_shape) != len(output_shape)
+
+ reassociation: List[List[int]] = []
+ if can_be_converted_directly:
+ if len(output_shape) == 1:
+ # output is 1 dimensional so collapse every dimension into the same dimension
+ reassociation.append(list(range(len(input_shape))))
+ else:
+ # input is m dimensional
+ # output is n dimensional
+ # and m is different from n
+
+ # we don't want to duplicate code, so we forget about input and output,
+ # and we focus on smaller shape and bigger shape
+
+ smaller_shape, bigger_shape = (
+ (output_shape, input_shape)
+ if len(output_shape) < len(input_shape)
+ else (input_shape, output_shape)
+ )
+ s_index, b_index = 0, 0
+
+ # now we will figure out how to group the bigger shape to get the smaller shape
+ # think of the algorithm below as
+ # keep merging the dimensions of the bigger shape
+ # until we have a match on the smaller shape
+ # then try to match the next dimension of the smaller shape
+ # if all dimensions of the smaller shape are matched
+ # we can convert it
+
+ group = []
+ size = 1
+ while s_index < len(smaller_shape) and b_index < len(bigger_shape):
+ # dimension `b_index` of `bigger_shape` belongs to current group
+ group.append(b_index)
+
+ # and current group has `size * bigger_shape[b_index]` elements now
+ size *= bigger_shape[b_index]
+
+ # if current group size matches the dimension `s_index` of `smaller_shape`
+ if size == smaller_shape[s_index]:
+ # we finalize this group and reset everything
+ size = 1
+ reassociation.append(group)
+ group = []
+
+ # now try to match the next dimension of `smaller_shape`
+ s_index += 1
+
+ # now process the next dimension of `bigger_shape`
+ b_index += 1
+
+ # handle the case where the bigger shape has trailing 1s
+ # e.g., (5,) -> (5, 1)
+ while b_index < len(bigger_shape) and bigger_shape[b_index] == 1:
+ reassociation[-1].append(b_index)
+ b_index += 1
+
+ # if not all dimensions of both shapes are processed exactly
+ if s_index != len(smaller_shape) or b_index != len(bigger_shape):
+ # we cannot convert
+ can_be_converted_directly = False
+
+ i64_type = IntegerType.get_signless(64)
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ if can_be_converted_directly:
+ reassociation_attr = ArrayAttr.get(
+ [
+ ArrayAttr.get([IntegerAttr.get(i64_type, dimension) for dimension in group])
+ for group in reassociation
+ ]
+ )
+ if len(output_shape) < len(input_shape):
+ return tensor.CollapseShapeOp(resulting_type, pred, reassociation_attr).result
+ return tensor.ExpandShapeOp(resulting_type, pred, reassociation_attr).result
+
+ flattened_type = NodeConverter.value_to_mlir_type(
+ self.ctx,
+ Value(
+ dtype=self.node.inputs[0].dtype,
+ shape=(int(np.prod(input_shape)),),
+ is_encrypted=self.node.inputs[0].is_encrypted,
+ ),
+ )
+ flattened_result = tensor.CollapseShapeOp(
+ flattened_type,
+ pred,
+ ArrayAttr.get(
+ [ArrayAttr.get([IntegerAttr.get(i64_type, i) for i in range(len(input_shape))])]
+ ),
+ ).result
+
+ return tensor.ExpandShapeOp(
+ resulting_type,
+ flattened_result,
+ ArrayAttr.get(
+ [ArrayAttr.get([IntegerAttr.get(i64_type, i) for i in range(len(output_shape))])]
+ ),
+ ).result
+
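The dimension-grouping loop in `_convert_reshape` can be exercised on its own. Below is a sketch of the same algorithm over plain tuples (no MLIR involved), under the assumption that a reshape is directly convertible exactly when the bigger shape's dimensions can be merged, in order, into the smaller shape's dimensions:

```python
from typing import List, Optional, Tuple

def reassociation_groups(
    input_shape: Tuple[int, ...], output_shape: Tuple[int, ...]
) -> Optional[List[List[int]]]:
    """Group dimensions of the bigger shape so each group's product matches
    one dimension of the smaller shape; return None if not possible."""
    smaller, bigger = (
        (output_shape, input_shape)
        if len(output_shape) < len(input_shape)
        else (input_shape, output_shape)
    )
    reassociation: List[List[int]] = []
    s_index, b_index = 0, 0
    group: List[int] = []
    size = 1
    while s_index < len(smaller) and b_index < len(bigger):
        # dimension `b_index` of the bigger shape joins the current group
        group.append(b_index)
        size *= bigger[b_index]
        if size == smaller[s_index]:
            # group complete: it matches dimension `s_index` of the smaller shape
            size = 1
            reassociation.append(group)
            group = []
            s_index += 1
        b_index += 1
    # absorb trailing 1s of the bigger shape into the last group, e.g. (5,) -> (5, 1)
    while b_index < len(bigger) and bigger[b_index] == 1:
        reassociation[-1].append(b_index)
        b_index += 1
    if s_index != len(smaller) or b_index != len(bigger):
        return None
    return reassociation

print(reassociation_groups((2, 3, 4), (6, 4)))  # [[0, 1], [2]]
print(reassociation_groups((5,), (5, 1)))       # [[0, 1]]
print(reassociation_groups((6,), (4, 3)))       # None
```

When this returns None, the conversion above falls back to a full collapse to one dimension followed by an expand to the output shape.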
+ def _convert_right_shift(self) -> OpResult:
+ """
+ Convert "right_shift" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ return self._convert_shift(orientation="right")
+
+ def _convert_static_assignment(self) -> OpResult:
+ """
+ Convert "assign.static" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ input_value = self.node.inputs[0]
+ input_shape = input_value.shape
+
+ index = list(self.node.properties["kwargs"]["index"])
+
+ while len(index) < input_value.ndim:
+ index.append(slice(None, None, None))
+
+ output_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ offsets = []
+ sizes = []
+ strides = []
+
+ for indexing_element, dimension_size in zip(index, input_shape):
+
+ if isinstance(indexing_element, slice):
+ size = int(np.zeros(dimension_size)[indexing_element].shape[0])
+ stride = int(indexing_element.step if indexing_element.step is not None else 1)
+ offset = int(
+ (
+ indexing_element.start
+ if indexing_element.start >= 0
+ else indexing_element.start + dimension_size
+ )
+ if indexing_element.start is not None
+ else (0 if stride > 0 else dimension_size - 1)
+ )
+
+ else:
+ size = 1
+ stride = 1
+ offset = int(
+ indexing_element if indexing_element >= 0 else indexing_element + dimension_size
+ )
+
+ offsets.append(offset)
+ sizes.append(size)
+ strides.append(stride)
+
+ i64_type = IntegerType.get_signless(64)
+ return tensor.InsertSliceOp(
+ output_type,
+ self.preds[1],
+ self.preds[0],
+ [],
+ [],
+ [],
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in offsets]),
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in sizes]),
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in strides]),
+ ).result
+
+ def _convert_static_indexing(self) -> OpResult:
+ """
+ Convert "index.static" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ input_value = self.node.inputs[0]
+ input_shape = input_value.shape
+
+ index = list(self.node.properties["kwargs"]["index"])
+
+ while len(index) < input_value.ndim:
+ index.append(slice(None, None, None))
+
+ output_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ if len(index) == len(input_shape) and all(isinstance(i, (int, np.integer)) for i in index):
+ indices = []
+ for value, dimension_size in zip(index, input_shape):
+ value = int(value)
+ attr = IntegerAttr.get(
+ IndexType.parse("index"), value if value >= 0 else value + dimension_size
+ )
+ indices.append(self._create_constant(IndexType.parse("index"), attr).result)
+ return tensor.ExtractOp(output_type, self.preds[0], indices).result
+
+ offsets = []
+ sizes = []
+ strides = []
+
+ destroyed_dimensions = []
+ for dimension, (indexing_element, dimension_size) in enumerate(zip(index, input_shape)):
+
+ if isinstance(indexing_element, slice):
+ size = int(np.zeros(dimension_size)[indexing_element].shape[0])
+ stride = int(indexing_element.step if indexing_element.step is not None else 1)
+ offset = int(
+ (
+ indexing_element.start
+ if indexing_element.start >= 0
+ else indexing_element.start + dimension_size
+ )
+ if indexing_element.start is not None
+ else (0 if stride > 0 else dimension_size - 1)
+ )
+
+ else:
+ destroyed_dimensions.append(dimension)
+ size = 1
+ stride = 1
+ offset = int(
+ indexing_element if indexing_element >= 0 else indexing_element + dimension_size
+ )
+
+ offsets.append(offset)
+ sizes.append(size)
+ strides.append(stride)
+
+ i64_type = IntegerType.get_signless(64)
+ if len(destroyed_dimensions) == 0:
+ return tensor.ExtractSliceOp(
+ output_type,
+ self.preds[0],
+ [],
+ [],
+ [],
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in offsets]),
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in sizes]),
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in strides]),
+ ).result
+
+ output_value = self.node.output
+
+ intermediate_shape = list(output_value.shape)
+ for dimension in destroyed_dimensions:
+ intermediate_shape.insert(dimension, 1)
+
+ intermediate = tensor.ExtractSliceOp(
+ RankedTensorType.get(
+ intermediate_shape,
+ NodeConverter.value_to_mlir_type(
+ self.ctx,
+ Value(output_value.dtype, shape=(), is_encrypted=output_value.is_encrypted),
+ ),
+ ),
+ self.preds[0],
+ [],
+ [],
+ [],
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in offsets]),
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in sizes]),
+ ArrayAttr.get([IntegerAttr.get(i64_type, value) for value in strides]),
+ ).result
+
+ reassociation = []
+
+ current_intermediate_dimension = 0
+ for _ in range(len(output_value.shape)):
+ indices = [current_intermediate_dimension]
+ while current_intermediate_dimension in destroyed_dimensions:
+ current_intermediate_dimension += 1
+ indices.append(current_intermediate_dimension)
+
+ reassociation.append(indices)
+ current_intermediate_dimension += 1
+ while current_intermediate_dimension < len(intermediate_shape):
+ reassociation[-1].append(current_intermediate_dimension)
+ current_intermediate_dimension += 1
+
+ return tensor.CollapseShapeOp(
+ output_type,
+ intermediate,
+ ArrayAttr.get(
+ [
+ ArrayAttr.get(
+ [IntegerAttr.get(i64_type, index) for index in indices],
+ )
+ for indices in reassociation
+ ],
+ ),
+ ).result
+
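The slice decomposition shared by `_convert_static_indexing` and `_convert_static_assignment` maps each indexing element to an (offset, size, stride) triple. A standalone sketch of that per-dimension computation:

```python
import numpy as np

def decompose(indexing_element, dimension_size: int):
    """Return (offset, size, stride) for one dimension, mirroring the
    slice handling in the static indexing/assignment conversions."""
    if isinstance(indexing_element, slice):
        # let numpy determine the number of selected elements
        size = int(np.zeros(dimension_size)[indexing_element].shape[0])
        stride = int(indexing_element.step if indexing_element.step is not None else 1)
        offset = int(
            (
                indexing_element.start
                if indexing_element.start >= 0
                else indexing_element.start + dimension_size
            )
            if indexing_element.start is not None
            else (0 if stride > 0 else dimension_size - 1)
        )
    else:
        # a plain integer index selects a single element
        size = 1
        stride = 1
        offset = int(
            indexing_element if indexing_element >= 0 else indexing_element + dimension_size
        )
    return offset, size, stride

print(decompose(slice(1, 7, 2), 10))        # (1, 3, 2)
print(decompose(-1, 10))                    # (9, 1, 1)
print(decompose(slice(None, None, -1), 5))  # (4, 5, -1)
```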
+ def _convert_squeeze(self) -> OpResult:
+ """
+ Convert "squeeze" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ # because of the tracing logic, we have the correct output shape
+
+ # if the output shape is (), it means (1, 1, ..., 1, 1) is squeezed
+ # and the result is a scalar, so we need to do indexing, not reshape
+ if self.node.output.shape == ():
+ assert all(size == 1 for size in self.node.inputs[0].shape)
+ self.node.properties["kwargs"]["index"] = (0,) * self.node.inputs[0].ndim
+ return self._convert_static_indexing()
+
+ # otherwise, a simple reshape would work as we already have the correct shape
+ return self._convert_reshape()
+
+ def _convert_sub(self) -> OpResult:
+ """
+ Convert "subtract" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ preds = self.preds
+
+ if self.all_of_the_inputs_are_encrypted:
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.SubEintOp(resulting_type, *preds).result
+ else:
+ result = fhe.SubEintOp(resulting_type, *preds).result
+
+ elif self.node.inputs[0].is_clear:
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.SubIntEintOp(resulting_type, *preds).result
+ else:
+ result = fhe.SubIntEintOp(resulting_type, *preds).result
+
+ else:
+ if self.one_of_the_inputs_is_a_tensor:
+ result = fhelinalg.SubEintIntOp(resulting_type, *preds).result
+ else:
+ result = fhe.SubEintIntOp(resulting_type, *preds).result
+
+ return result
+
+ def _convert_sum(self) -> OpResult:
+ """
+ Convert "sum" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ axes = self.node.properties["kwargs"].get("axis", [])
+ keep_dims = self.node.properties["kwargs"].get("keepdims", False)
+
+ if axes is None:
+ axes = []
+ elif isinstance(axes, int):
+ axes = [axes]
+ elif isinstance(axes, tuple):
+ axes = list(axes)
+
+ input_dimensions = self.node.inputs[0].ndim
+ for i, axis in enumerate(axes):
+ if axis < 0:
+ axes[i] += input_dimensions
+
+ return fhelinalg.SumOp(
+ resulting_type,
+ self.preds[0],
+ axes=ArrayAttr.get(
+ [IntegerAttr.get(IntegerType.get_signless(64), axis) for axis in axes]
+ ),
+ keep_dims=BoolAttr.get(keep_dims),
+ ).result
+
+ def _convert_tlu(self) -> OpResult:
+ """
+ Convert Operation.Generic node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ variable_input_index = -1
+
+ preds = self.graph.ordered_preds_of(self.node)
+ for i, pred in enumerate(preds):
+ if pred.operation != Operation.Constant:
+ variable_input_index = i
+ break
+
+ assert_that(variable_input_index != -1)
+
+ tables = construct_deduplicated_tables(self.node, preds)
+ assert_that(len(tables) > 0)
+
+ lut_shape: Tuple[int, ...] = ()
+ map_shape: Tuple[int, ...] = ()
+
+ if len(tables) == 1:
+ table = tables[0][0]
+
+ # The reduction on 63b is to avoid problems like doing a TLU of
+ # the form T[j] = 2<<j
+
+ def _convert_transpose(self) -> OpResult:
+ """
+ Convert "transpose" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+ pred = self.preds[0]
+
+ axes = self.node.properties["kwargs"].get("axes", [])
+
+ return fhelinalg.TransposeOp(
+ resulting_type,
+ pred,
+ axes=ArrayAttr.get(
+ [IntegerAttr.get(IntegerType.get_signless(64), axis) for axis in axes]
+ ),
+ ).result
+
+ def _convert_zeros(self) -> OpResult:
+ """
+ Convert "zeros" node to its corresponding MLIR representation.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ if self.node.output.is_scalar:
+ result = fhe.ZeroEintOp(resulting_type).result
+ else:
+ result = fhe.ZeroTensorOp(resulting_type).result
+
+ return result
+
+ def _create_add(self, resulting_type, a, b) -> OpResult:
+ if self.one_of_the_inputs_is_a_tensor:
+ return fhelinalg.AddEintOp(resulting_type, a, b).result
+
+ return fhe.AddEintOp(resulting_type, a, b).result
+
+ def _create_add_clear(self, resulting_type, a, b) -> OpResult:
+ if self.one_of_the_inputs_is_a_tensor:
+ return fhelinalg.AddEintIntOp(resulting_type, a, b).result
+
+ return fhe.AddEintIntOp(resulting_type, a, b).result
+
+ def _create_constant(self, mlir_type: Type, mlir_attribute: Attribute) -> OpResult:
+ result = self.constant_cache.get((mlir_type, mlir_attribute))
+ if result is None:
+ # ConstantOp is being decorated, and the init function is supposed to take more
+ # arguments than those pylint is considering
+ # pylint: disable=too-many-function-args
+ result = arith.ConstantOp(mlir_type, mlir_attribute)
+ # pylint: enable=too-many-function-args
+ self.constant_cache[(mlir_type, mlir_attribute)] = result
+ return result
+
+ def _create_constant_integer(self, bit_width, x) -> OpResult:
+ constant_value = Value(
+ Integer(is_signed=True, bit_width=bit_width + 1),
+ shape=(1,) if self.one_of_the_inputs_is_a_tensor else (),
+ is_encrypted=False,
+ )
+ constant_type = NodeConverter.value_to_mlir_type(self.ctx, constant_value)
+
+ if self.one_of_the_inputs_is_a_tensor:
+ return self._create_constant(
+ constant_type,
+ Attribute.parse(f"dense<{[int(x)]}> : {constant_type}"),
+ )
+
+ return self._create_constant(constant_type, IntegerAttr.get(constant_type, x))
+
+ def _create_mul_clear(self, resulting_type, a, b) -> OpResult:
+ if self.one_of_the_inputs_is_a_tensor:
+ return fhelinalg.MulEintIntOp(resulting_type, a, b).result
+
+ return fhe.MulEintIntOp(resulting_type, a, b).result
+
+ def _create_sub(self, resulting_type, a, b) -> OpResult:
+ if self.one_of_the_inputs_is_a_tensor:
+ return fhelinalg.SubEintOp(resulting_type, a, b).result
+
+ return fhe.SubEintOp(resulting_type, a, b).result
+
+ def _create_tlu(self, resulting_type, pred, lut_values) -> OpResult:
+ resulting_type_str = str(resulting_type)
+ bit_width_search = re.search(r"FHE\.eint<([0-9]+)>", resulting_type_str)
+
+ assert bit_width_search is not None
+ bit_width_str = bit_width_search.group(1)
+
+ bit_width = int(bit_width_str)
+ lut_values += [0] * ((2**bit_width) - len(lut_values))
+
+ lut_element_type = IntegerType.get_signless(64, context=self.ctx)
+ lut_type = RankedTensorType.get((len(lut_values),), lut_element_type)
+ lut_attr = Attribute.parse(f"dense<{str(lut_values)}> : {lut_type}")
+ lut = self._create_constant(lut_type, lut_attr).result
+
+ if self.one_of_the_inputs_is_a_tensor:
+ return fhelinalg.ApplyLookupTableEintOp(resulting_type, pred, lut).result
+
+ return fhe.ApplyLookupTableEintOp(resulting_type, pred, lut).result
+
+ def _convert_bitwise(self, operation: Callable) -> OpResult:
+ """
+ Convert `bitwise_{and,or,xor}` nodes to their corresponding MLIR representations.
+
+ Returns:
+ OpResult:
+ in-memory MLIR representation corresponding to `self.node`
+ """
+
+ if not self.all_of_the_inputs_are_encrypted:
+ return self._convert_tlu()
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ x = self.preds[0]
+ y = self.preds[1]
+
+ x_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.inputs[0])
+ y_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.inputs[1])
+
+ assert isinstance(self.node.output.dtype, Integer)
+ bit_width = self.node.output.dtype.bit_width
+
+ chunk_size = int(np.floor(bit_width / 2))
+ mask = (2**chunk_size) - 1
+
+ original_bit_width = max(
+ pred_node.properties["original_bit_width"]
+ for pred_node in self.graph.ordered_preds_of(self.node)
+ )
+
+ chunks = []
+ for offset in range(0, original_bit_width, chunk_size):
+ x_lut = [((x >> offset) & mask) << chunk_size for x in range(2**bit_width)]
+ y_lut = [(y >> offset) & mask for y in range(2**bit_width)]
+
+ x_chunk = self._create_tlu(x_type, x, x_lut)
+ y_chunk = self._create_tlu(y_type, y, y_lut)
+
+ packed_x_and_y_chunks = self._create_add(resulting_type, x_chunk, y_chunk)
+ result_chunk = self._create_tlu(
+ resulting_type,
+ packed_x_and_y_chunks,
+ [
+ operation(x, y) << offset
+ for x in range(2**chunk_size)
+ for y in range(2**chunk_size)
+ ],
+ )
+
+ chunks.append(result_chunk)
+
+ # add all chunks together in a tree to maximize dataflow parallelization
+
+ # c1 c2 c3 c4
+ # \/ \/
+ # s2 s1
+ # \ /
+ # \ /
+ # s3
+
+ while len(chunks) > 1:
+ a = chunks.pop()
+ b = chunks.pop()
+
+ result = self._create_add(resulting_type, a, b)
+ chunks.insert(0, result)
+
+ return chunks[0]
+
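The chunk-and-pack strategy used above for encrypted bitwise operations can be sketched in plain Python. This is a simplified cleartext sketch of the same table layout, not the converter's API; `bit_width` is illustrative:

```python
def chunked_bitwise(op, x, y, bit_width):
    """Emulate a bitwise op on two integers using only small lookup tables,
    mirroring the chunked TLU decomposition used by the converter."""
    chunk_size = bit_width // 2
    mask = (2**chunk_size) - 1

    result = 0
    for offset in range(0, bit_width, chunk_size):
        # extract one chunk from each operand
        x_chunk = (x >> offset) & mask
        y_chunk = (y >> offset) & mask

        # pack both chunks into a single value (what the packed TLU sees)
        packed = (x_chunk << chunk_size) + y_chunk

        # look up the result of the operation for this chunk pair,
        # pre-shifted back to the chunk's position
        table = [
            op(a, b) << offset
            for a in range(2**chunk_size)
            for b in range(2**chunk_size)
        ]
        result += table[packed]

    return result
```

The final `result +=` accumulation plays the role of the tree of additions over `chunks` in the converter.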
+ def _convert_compare(self, accept: Set[Comparison], invert_operands: bool = False) -> OpResult:
+ if not self.all_of_the_inputs_are_encrypted:
+ return self._convert_tlu()
+
+ inputs = self.node.inputs
+ preds = self.preds
+
+ if invert_operands:
+ inputs = inputs[::-1]
+ preds = preds[::-1]
+
+ x = preds[0]
+ y = preds[1]
+
+ x_is_signed = cast(Integer, inputs[0].dtype).is_signed
+ y_is_signed = cast(Integer, inputs[1].dtype).is_signed
+
+ x_is_unsigned = not x_is_signed
+ y_is_unsigned = not y_is_signed
+
+ pred_nodes = self.graph.ordered_preds_of(self.node)
+ if invert_operands:
+ pred_nodes = list(reversed(pred_nodes))
+
+ x_bit_width, y_bit_width = [node.properties["original_bit_width"] for node in pred_nodes]
+ comparison_bit_width = max(x_bit_width, y_bit_width)
+
+ x_dtype = Integer(is_signed=x_is_signed, bit_width=x_bit_width)
+ y_dtype = Integer(is_signed=y_is_signed, bit_width=y_bit_width)
+
+ x_minus_y_min = x_dtype.min() - y_dtype.max()
+ x_minus_y_max = x_dtype.max() - y_dtype.min()
+
+ x_minus_y_range = [x_minus_y_min, x_minus_y_max]
+ x_minus_y_dtype = Integer.that_can_represent(x_minus_y_range)
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ bit_width = cast(Integer, self.node.output.dtype).bit_width
+ signed_offset = 2 ** (bit_width - 1)
+
+ sanitizer = self._create_constant_integer(bit_width, signed_offset)
+ if x_minus_y_dtype.bit_width <= bit_width:
+ x_minus_y = self._create_sub(resulting_type, x, y)
+ sanitized_x_minus_y = self._create_add_clear(resulting_type, x_minus_y, sanitizer)
+
+ accept_less = int(Comparison.LESS in accept)
+ accept_equal = int(Comparison.EQUAL in accept)
+ accept_greater = int(Comparison.GREATER in accept)
+
+ all_cells = 2**bit_width
+
+ less_cells = 2 ** (bit_width - 1)
+ equal_cells = 1
+ greater_cells = less_cells - equal_cells
+
+ assert less_cells + equal_cells + greater_cells == all_cells
+
+ table = (
+ [accept_less] * less_cells
+ + [accept_equal] * equal_cells
+ + [accept_greater] * greater_cells
+ )
+ return self._create_tlu(resulting_type, sanitized_x_minus_y, table)
+
+ # Comparison between signed and unsigned is tricky.
+ # To deal with them, we add -min of the signed number to both operands
+ # such that they are both positive. To avoid overflowing
+ # the unsigned operand this addition is done "virtually"
+ # while constructing one of the luts.
+
+ # A flag ("is_unsigned_greater_than_half") is emitted in MLIR to keep track
+ # if the unsigned operand was greater than the max signed number as it
+ # is needed to determine the result of the comparison.
+
+ # Example: to compare x and y where x is an int3 and y a uint3, when y
+ # is greater than or equal to 4 we are sure that x will be less than y.
+
+ if inputs[0].shape != self.node.output.shape:
+ x = self._create_add(resulting_type, x, self._convert_zeros())
+ if inputs[1].shape != self.node.output.shape:
+ y = self._create_add(resulting_type, y, self._convert_zeros())
+
+ offset_x_by = 0
+ offset_y_by = 0
+
+ if x_is_signed or y_is_signed:
+ if x_is_signed:
+ x = self._create_add_clear(resulting_type, x, sanitizer)
+ else:
+ offset_x_by = signed_offset
+
+ if y_is_signed:
+ y = self._create_add_clear(resulting_type, y, sanitizer)
+ else:
+ offset_y_by = signed_offset
+
+ def compare(x, y):
+ if x < y:
+ return Comparison.LESS
+
+ if x > y:
+ return Comparison.GREATER
+
+ return Comparison.EQUAL
+
+ chunk_size = int(np.floor(bit_width / 2))
+ carries = self._pack_to_chunk_groups_and_map(
+ resulting_type,
+ comparison_bit_width,
+ chunk_size,
+ x,
+ y,
+ lambda i, x, y: compare(x, y) << (min(i, 1) * 2),
+ x_offset=offset_x_by,
+ y_offset=offset_y_by,
+ )
+
+ # This is the reduction step -- we have an array where the entry i is the
+ # result of comparing the chunks of x and y at position i.
+
+ all_comparisons = [Comparison.EQUAL, Comparison.LESS, Comparison.GREATER, Comparison.UNUSED]
+ pick_first_not_equal_lut = [
+ int(
+ current_comparison
+ if previous_comparison == Comparison.EQUAL
+ else previous_comparison
+ )
+ for current_comparison in all_comparisons
+ for previous_comparison in all_comparisons
+ ]
+
+ carry = carries[0]
+ for next_carry in carries[1:]:
+ combined_carries = self._create_add(resulting_type, next_carry, carry)
+ carry = self._create_tlu(resulting_type, combined_carries, pick_first_not_equal_lut)
+
+ if x_is_signed != y_is_signed:
+ carry_bit_width = 2
+ is_less_mask = int(Comparison.LESS)
+
+ is_unsigned_greater_than_half = self._create_tlu(
+ resulting_type,
+ x if x_is_unsigned else y,
+ [int(value >= signed_offset) << carry_bit_width for value in range(2**bit_width)],
+ )
+ packed_carry_and_is_unsigned_greater_than_half = self._create_add(
+ resulting_type,
+ is_unsigned_greater_than_half,
+ carry,
+ )
+
+ # this function is actually converting either
+ # - lhs < rhs
+ # - lhs <= rhs
+
+ # in the implementation, we call
+ # - x = lhs
+ # - y = rhs
+
+ # so if y is unsigned and greater than half
+ # - y is definitely bigger than x
+ # - is_unsigned_greater_than_half == 1
+ # - result == (lhs < rhs) == (x < y) == 1
+
+ # so if x is unsigned and greater than half
+ # - x is definitely bigger than y
+ # - is_unsigned_greater_than_half == 1
+ # - result == (lhs < rhs) == (x < y) == 0
+
+ if y_is_unsigned:
+ result_table = [
+ 1 if (i >> carry_bit_width) else (i & is_less_mask) for i in range(2**3)
+ ]
+ else:
+ result_table = [
+ 0 if (i >> carry_bit_width) else (i & is_less_mask) for i in range(2**3)
+ ]
+
+ result = self._create_tlu(
+ resulting_type,
+ packed_carry_and_is_unsigned_greater_than_half,
+ result_table,
+ )
+ else:
+ boolean_result_lut = [int(comparison in accept) for comparison in all_comparisons]
+ result = self._create_tlu(resulting_type, carry, boolean_result_lut)
+
+ return result
+
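The single-TLU path above reduces `x < y` to one subtraction plus a table lookup over the sanitized difference. A cleartext sketch of that trick, assuming the difference fits in `bit_width` bits (illustrative names, not the converter's API):

```python
def less_than_via_table(x, y, bit_width):
    """Evaluate x < y with one table lookup on the sanitized difference,
    mirroring the single-TLU comparison path."""
    signed_offset = 2 ** (bit_width - 1)

    # sanitize: shift the (possibly negative) difference into [0, 2**bit_width)
    index = (x - y + signed_offset) % (2**bit_width)

    # cells [0, 2**(bit_width - 1)) hold "less", one cell holds "equal",
    # and the remaining cells hold "greater"
    less_cells = 2 ** (bit_width - 1)
    equal_cells = 1
    greater_cells = less_cells - equal_cells
    table = [1] * less_cells + [0] * equal_cells + [0] * greater_cells
    return table[index]
```

For `x <= y`, the single "equal" cell would hold `1` instead of `0`, exactly as the `accept` set selects above.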
+ def _convert_equality(self, equals: bool) -> OpResult:
+ if not self.all_of_the_inputs_are_encrypted:
+ return self._convert_tlu()
+
+ x = self.preds[0]
+ y = self.preds[1]
+
+ x_is_signed = cast(Integer, self.node.inputs[0].dtype).is_signed
+ y_is_signed = cast(Integer, self.node.inputs[1].dtype).is_signed
+
+ x_is_unsigned = not x_is_signed
+ y_is_unsigned = not y_is_signed
+
+ x_bit_width, y_bit_width = [
+ node.properties["original_bit_width"] for node in self.graph.ordered_preds_of(self.node)
+ ]
+ comparison_bit_width = max(x_bit_width, y_bit_width)
+
+ x_dtype = Integer(is_signed=x_is_signed, bit_width=x_bit_width)
+ y_dtype = Integer(is_signed=y_is_signed, bit_width=y_bit_width)
+
+ x_minus_y_min = x_dtype.min() - y_dtype.max()
+ x_minus_y_max = x_dtype.max() - y_dtype.min()
+
+ x_minus_y_range = [x_minus_y_min, x_minus_y_max]
+ x_minus_y_dtype = Integer.that_can_represent(x_minus_y_range)
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ bit_width = cast(Integer, self.node.output.dtype).bit_width
+ signed_offset = 2 ** (bit_width - 1)
+
+ sanitizer = self._create_constant_integer(bit_width, signed_offset)
+ if x_minus_y_dtype.bit_width <= bit_width:
+ x_minus_y = self._create_sub(resulting_type, x, y)
+ sanitized_x_minus_y = self._create_add_clear(resulting_type, x_minus_y, sanitizer)
+
+ zero_position = 2 ** (bit_width - 1)
+ if equals:
+ operation_lut = [int(i == zero_position) for i in range(2**bit_width)]
+ else:
+ operation_lut = [int(i != zero_position) for i in range(2**bit_width)]
+
+ return self._create_tlu(resulting_type, sanitized_x_minus_y, operation_lut)
+
+ chunk_size = int(np.floor(bit_width / 2))
+ number_of_chunks = int(np.ceil(bit_width / chunk_size))
+
+ if x_is_signed != y_is_signed:
+ number_of_chunks += 1
+
+ if self.node.inputs[0].shape != self.node.output.shape:
+ x = self._create_add(resulting_type, x, self._convert_zeros())
+ if self.node.inputs[1].shape != self.node.output.shape:
+ y = self._create_add(resulting_type, y, self._convert_zeros())
+
+ greater_than_half_lut = [
+ int(i >= signed_offset) << (number_of_chunks - 1) for i in range(2**bit_width)
+ ]
+ if x_is_unsigned and y_is_signed:
+ is_unsigned_greater_than_half = self._create_tlu(
+ resulting_type,
+ x,
+ greater_than_half_lut,
+ )
+ elif x_is_signed and y_is_unsigned:
+ is_unsigned_greater_than_half = self._create_tlu(
+ resulting_type,
+ y,
+ greater_than_half_lut,
+ )
+ else:
+ is_unsigned_greater_than_half = None
+
+ offset_x_by = 0
+ offset_y_by = 0
+
+ if x_is_signed or y_is_signed:
+ if x_is_signed:
+ x = self._create_add_clear(resulting_type, x, sanitizer)
+ else:
+ offset_x_by = signed_offset
+
+ if y_is_signed:
+ y = self._create_add_clear(resulting_type, y, sanitizer)
+ else:
+ offset_y_by = signed_offset
+
+ carries = self._pack_to_chunk_groups_and_map(
+ resulting_type,
+ comparison_bit_width,
+ chunk_size,
+ x,
+ y,
+ lambda _, x, y: int(x != y),
+ x_offset=offset_x_by,
+ y_offset=offset_y_by,
+ )
+
+ if is_unsigned_greater_than_half is not None:
+ carries.append(is_unsigned_greater_than_half)
+
+ while len(carries) > 1:
+ a = carries.pop()
+ b = carries.pop()
+
+ result = self._create_add(resulting_type, a, b)
+ carries.insert(0, result)
+
+ carry = carries[0]
+
+ return self._create_tlu(
+ resulting_type,
+ carry,
+ [int(i == 0 if equals else i != 0) for i in range(2**bit_width)],
+ )
+
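The equality path uses the same sanitized-difference trick, with a table that is `1` only at the "zero difference" position. A cleartext sketch (illustrative, not the converter's API):

```python
def equals_via_table(x, y, bit_width):
    """Evaluate x == y with one table lookup on the sanitized difference:
    only the cell at 2**(bit_width - 1), a zero difference, maps to 1."""
    zero_position = 2 ** (bit_width - 1)
    index = (x - y + zero_position) % (2**bit_width)
    table = [int(i == zero_position) for i in range(2**bit_width)]
    return table[index]
```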
+ def _convert_shift(self, orientation: str) -> OpResult:
+ if not self.all_of_the_inputs_are_encrypted:
+ return self._convert_tlu()
+
+ resulting_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.output)
+
+ x = self.preds[0]
+ b = self.preds[1]
+
+ b_type = NodeConverter.value_to_mlir_type(self.ctx, self.node.inputs[1])
+
+ assert isinstance(self.node.output.dtype, Integer)
+ bit_width = self.node.output.dtype.bit_width
+
+ pred_nodes = self.graph.ordered_preds_of(self.node)
+
+ x_original_bit_width = pred_nodes[0].properties["original_bit_width"]
+ b_original_bit_width = pred_nodes[1].properties["original_bit_width"]
+
+ if self.node.inputs[0].shape != self.node.output.shape:
+ x = self._create_add(resulting_type, x, self._convert_zeros())
+
+ if x_original_bit_width + b_original_bit_width <= bit_width:
+ shift_multiplier = self._create_constant_integer(bit_width, 2**b_original_bit_width)
+ shifted_x = self._create_mul_clear(resulting_type, x, shift_multiplier)
+ packed_x_and_b = self._create_add(resulting_type, shifted_x, b)
+ return self._create_tlu(
+ resulting_type,
+ packed_x_and_b,
+ [
+ (x << b) if orientation == "left" else (x >> b)
+ for x in range(2**x_original_bit_width)
+ for b in range(2**b_original_bit_width)
+ ],
+ )
+
+ # Left shifts of x << b can be done as follows:
+ # - left shift of x by 8 if b & 0b1000 > 0
+ # - left shift of x by 4 if b & 0b0100 > 0
+ # - left shift of x by 2 if b & 0b0010 > 0
+ # - left shift of x by 1 if b & 0b0001 > 0
+
+ # Encoding this condition is non-trivial -- however,
+ # it can be done using the following trick:
+
+ # y = (b & 0b1000 > 0) * ((x << 8) - x) + x
+
+ # When b & 0b1000, then:
+ # y = 1 * ((x << 8) - x) + x = (x << 8) - x + x = x << 8
+
+ # When b & 0b1000 == 0 then:
+ # y = 0 * ((x << 8) - x) + x = x
+
+ # The same trick can be used for right shift but with:
+ # y = x - (b & 0b1000 > 0) * (x - (x >> 8))
+
+ original_bit_width = self.node.properties["original_bit_width"]
+ chunk_size = min(original_bit_width, bit_width - 1)
+
+ for i in reversed(range(b_original_bit_width)):
+ to_check = 2**i
+
+ should_shift = self._create_tlu(
+ b_type,
+ b,
+ [int((b & to_check) > 0) for b in range(2**bit_width)],
+ )
+ shifted_x = self._create_tlu(
+ resulting_type,
+ x,
+ (
+ [(x << to_check) - x for x in range(2**bit_width)]
+ if orientation == "left"
+ else [x - (x >> to_check) for x in range(2**bit_width)]
+ ),
+ )
+
+ chunks = []
+ for offset in range(0, original_bit_width, chunk_size):
+ bits_to_process = min(chunk_size, original_bit_width - offset)
+ right_shift_by = original_bit_width - offset - bits_to_process
+ mask = (2**bits_to_process) - 1
+
+ chunk_x = self._create_tlu(
+ resulting_type,
+ shifted_x,
+ [(((x >> right_shift_by) & mask) << 1) for x in range(2**bit_width)],
+ )
+ packed_chunk_x_and_should_shift = self._create_add(
+ resulting_type, chunk_x, should_shift
+ )
+
+ chunk = self._create_tlu(
+ resulting_type,
+ packed_chunk_x_and_should_shift,
+ [
+ (x << right_shift_by) if b else 0
+ for x in range(2**chunk_size)
+ for b in [0, 1]
+ ],
+ )
+ chunks.append(chunk)
+
+ difference = chunks[0]
+ for chunk in chunks[1:]:
+ difference = self._create_add(resulting_type, difference, chunk)
+
+ x = (
+ self._create_add(resulting_type, difference, x)
+ if orientation == "left"
+ else self._create_sub(resulting_type, x, difference)
+ )
+
+ return x
+
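The `y = flag * ((x << s) - x) + x` trick described in the comments above can be verified directly in cleartext Python (a standalone sketch; names are illustrative):

```python
def shift_left_via_flags(x, b, b_bits):
    """Compute x << b using only per-bit flags of b, mirroring the
    y = flag * ((x << s) - x) + x decomposition."""
    for i in reversed(range(b_bits)):
        s = 2**i
        flag = int((b & s) > 0)
        # when flag == 1: x becomes (x << s); when flag == 0: x is unchanged
        x = flag * ((x << s) - x) + x
    return x


def shift_right_via_flags(x, b, b_bits):
    """Same idea for right shifts, using y = x - flag * (x - (x >> s))."""
    for i in reversed(range(b_bits)):
        s = 2**i
        flag = int((b & s) > 0)
        x = x - flag * (x - (x >> s))
    return x
```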
+ def _pack_to_chunk_groups_and_map(
+ self,
+ resulting_type: Type,
+ bit_width: int,
+ chunk_size: int,
+ x: OpResult,
+ y: OpResult,
+ mapper: Callable,
+ x_offset: int = 0,
+ y_offset: int = 0,
+ ) -> List[OpResult]:
+ """
+ Split x and y into chunks, pack the chunks and map each packed value to another integer.
+
+ Split x and y into chunks of size `chunk_size`.
+ Pack those chunks into an integer and apply `mapper` function to it.
+ Combine those results into a list and return it.
+
+ If `x_offset` (resp. `y_offset`) is provided, execute the function
+ for `x + x_offset` (resp. `y + y_offset`) instead of `x` and `y`.
+ """
+
+ result = []
+ for chunk_index, offset in enumerate(range(0, bit_width, chunk_size)):
+ bits_to_process = min(chunk_size, bit_width - offset)
+ right_shift_by = bit_width - offset - bits_to_process
+ mask = (2**bits_to_process) - 1
+
+ chunk_x = self._create_tlu(
+ resulting_type,
+ x,
+ [
+ ((((x + x_offset) >> right_shift_by) & mask) << bits_to_process)
+ for x in range(2**bit_width)
+ ],
+ )
+ chunk_y = self._create_tlu(
+ resulting_type,
+ y,
+ [((y + y_offset) >> right_shift_by) & mask for y in range(2**bit_width)],
+ )
+
+ packed_chunks = self._create_add(resulting_type, chunk_x, chunk_y)
+ mapped_chunks = self._create_tlu(
+ resulting_type,
+ packed_chunks,
+ [mapper(chunk_index, x, y) for x in range(mask + 1) for y in range(mask + 1)],
+ )
+
+ result.append(mapped_chunks)
+
+ return result
+
+ # pylint: enable=no-self-use,too-many-branches,too-many-locals,too-many-statements
diff --git a/frontends/concrete-python/concrete/numpy/mlir/utils.py b/frontends/concrete-python/concrete/numpy/mlir/utils.py
new file mode 100644
index 000000000..5a645fbdb
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/mlir/utils.py
@@ -0,0 +1,171 @@
+"""
+Declaration of various functions and constants related to MLIR conversion.
+"""
+
+from collections import defaultdict, deque
+from copy import deepcopy
+from itertools import product
+from typing import Any, DefaultDict, List, Optional, Tuple, Union, cast
+
+import numpy as np
+
+from ..dtypes import Integer
+from ..internal.utils import assert_that
+from ..representation import Node, Operation
+
+MAXIMUM_TLU_BIT_WIDTH = 16
+
+
+class HashableNdarray:
+ """
+ HashableNdarray class, to use numpy arrays in dictionaries.
+ """
+
+ array: np.ndarray
+
+ def __init__(self, array: np.ndarray):
+ self.array = array
+
+ def __eq__(self, other: object) -> bool:
+ return isinstance(other, HashableNdarray) and np.array_equal(self.array, other.array)
+
+ def __hash__(self) -> int:
+ return hash(self.array.tobytes())
+
+
+def flood_replace_none_values(table: list):
+ """
+ Use flooding algorithm to replace `None` values.
+
+ Args:
+ table (list):
+ the list in which there are `None` values that need to be replaced
+ with copies of the closest non `None` data from the list
+ """
+
+ assert_that(any(value is not None for value in table))
+
+ not_none_values_idx = deque(idx for idx, value in enumerate(table) if value is not None)
+ while not_none_values_idx:
+
+ current_idx = not_none_values_idx.popleft()
+ current_value = table[current_idx]
+
+ previous_idx = current_idx - 1
+ next_idx = current_idx + 1
+
+ if previous_idx >= 0 and table[previous_idx] is None:
+ table[previous_idx] = deepcopy(current_value)
+ not_none_values_idx.append(previous_idx)
+
+ if next_idx < len(table) and table[next_idx] is None:
+ table[next_idx] = deepcopy(current_value)
+ not_none_values_idx.append(next_idx)
+
+ assert_that(all(value is not None for value in table))
+
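A quick standalone illustration of the flooding behaviour (same breadth-first algorithm, returning the table for convenience):

```python
from collections import deque
from copy import deepcopy


def flood_replace(table):
    """Replace None entries with copies of the closest non-None neighbour,
    flooding outwards breadth-first from every known value."""
    queue = deque(i for i, v in enumerate(table) if v is not None)
    while queue:
        i = queue.popleft()
        for j in (i - 1, i + 1):
            if 0 <= j < len(table) and table[j] is None:
                table[j] = deepcopy(table[i])
                queue.append(j)
    return table
```

Each hole is filled from its nearest known neighbour, so failed table entries inherit a plausible nearby value.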
+
+def construct_table(node: Node, preds: List[Node]) -> List[Any]:
+ """
+ Construct the lookup table for an Operation.Generic node.
+
+ Args:
+ node (Node):
+ Operation.Generic to construct the table
+
+ preds (List[Node]):
+ ordered predecessors to `node`
+
+ Returns:
+ List[Any]:
+ lookup table corresponding to `node` and its input value
+ """
+
+ variable_input_index = -1
+ for index, pred in enumerate(preds):
+ if pred.operation != Operation.Constant:
+ variable_input_index = index
+ break
+ assert_that(variable_input_index != -1)
+
+ variable_input_dtype = node.inputs[variable_input_index].dtype
+ variable_input_shape = node.inputs[variable_input_index].shape
+
+ assert_that(isinstance(variable_input_dtype, Integer))
+ variable_input_dtype = cast(Integer, variable_input_dtype)
+
+ inputs: List[Any] = [pred() if pred.operation == Operation.Constant else None for pred in preds]
+
+ table: List[Optional[Union[np.bool_, np.integer, np.floating, np.ndarray]]] = []
+ for value in range(variable_input_dtype.min(), variable_input_dtype.max() + 1):
+ try:
+ inputs[variable_input_index] = np.ones(variable_input_shape, dtype=np.int64) * value
+ table.append(node(*inputs))
+ except Exception: # pylint: disable=broad-except
+ # here we try our best to fill the table
+ # if it fails, we append None and let the flooding algorithm replace None values below
+ table.append(None)
+
+ flood_replace_none_values(table)
+
+ return table
+
+
+def construct_deduplicated_tables(
+ node: Node,
+ preds: List[Node],
+) -> Tuple[Tuple[np.ndarray, List[Tuple[int, ...]]], ...]:
+ """
+ Construct lookup tables for each cell of the input for an Operation.Generic node.
+
+ Args:
+ node (Node):
+ Operation.Generic to construct the table
+
+ preds (List[Node]):
+ ordered predecessors to `node`
+
+ Returns:
+ Tuple[Tuple[numpy.ndarray, List[Tuple[int, ...]]], ...]:
+ tuple containing tuples of 2 for
+ - constructed table
+ - list of indices of the input that use the constructed table
+
+ e.g.,
+
+ .. code-block:: python
+
+ (
+ (np.array([3, 1, 2, 4]), [(1, 0), (2, 1)]),
+ (np.array([5, 8, 6, 7]), [(0, 0), (0, 1), (1, 1), (2, 0)]),
+ )
+
+ means the lookup on 3x2 input will result in
+
+ .. code-block:: python
+
+ [ [5, 8, 6, 7][input[0, 0]] , [5, 8, 6, 7][input[0, 1]] ]
+ [ [3, 1, 2, 4][input[1, 0]] , [5, 8, 6, 7][input[1, 1]] ]
+ [ [5, 8, 6, 7][input[2, 0]] , [3, 1, 2, 4][input[2, 1]] ]
+ """
+
+ node_complete_table = np.concatenate(
+ tuple(np.expand_dims(array, -1) for array in construct_table(node, preds)),
+ axis=-1,
+ )
+
+ all_cells_idx = product(*tuple(range(max_val) for max_val in node_complete_table.shape[:-1]))
+ tables_to_cell_idx: DefaultDict[HashableNdarray, List[Tuple[int, ...]]] = defaultdict(list)
+
+ idx: Tuple[int, ...]
+ all_idx_set = set()
+ for idx in all_cells_idx:
+ hashable_array = HashableNdarray(node_complete_table[idx])
+ tables_to_cell_idx[hashable_array].append(idx)
+ all_idx_set.add(idx)
+
+ assert_that(len(all_idx_set) == np.prod(node_complete_table.shape[:-1]))
+
+ return tuple(
+ (hashable_array.array, indices) for hashable_array, indices in tables_to_cell_idx.items()
+ )
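The grouping step above can be sketched with plain numpy, keyed by the raw bytes of each per-cell table. This is a simplified standalone sketch: `per_cell_tables` is assumed to stack one lookup table per input cell along the last axis, as `construct_deduplicated_tables` builds it.

```python
from collections import defaultdict

import numpy as np


def deduplicate_tables(per_cell_tables):
    """Group identical per-cell lookup tables, keyed by their bytes,
    and return (table, [cell indices]) pairs."""
    groups = defaultdict(list)
    for idx in np.ndindex(per_cell_tables.shape[:-1]):
        # tobytes() gives a hashable key, same role as HashableNdarray
        groups[per_cell_tables[idx].tobytes()].append(idx)
    return [
        (np.frombuffer(key, dtype=per_cell_tables.dtype), indices)
        for key, indices in groups.items()
    ]
```

Hashing the raw bytes is the same idea `HashableNdarray` implements, so cells sharing a table end up in one group and need only one materialized LUT.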
diff --git a/frontends/concrete-python/concrete/numpy/representation/__init__.py b/frontends/concrete-python/concrete/numpy/representation/__init__.py
new file mode 100644
index 000000000..cc67d1504
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/representation/__init__.py
@@ -0,0 +1,7 @@
+"""
+Define structures used to represent computation.
+"""
+
+from .graph import Graph
+from .node import Node
+from .operation import Operation
diff --git a/frontends/concrete-python/concrete/numpy/representation/evaluator.py b/frontends/concrete-python/concrete/numpy/representation/evaluator.py
new file mode 100644
index 000000000..3f32611f6
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/representation/evaluator.py
@@ -0,0 +1,52 @@
+"""
+Declaration of various `Evaluator` classes, to make graphs picklable.
+"""
+
+# ruff: noqa: ARG002
+
+
+class ConstantEvaluator:
+ """
+ ConstantEvaluator class, to evaluate Operation.Constant nodes.
+ """
+
+ def __init__(self, properties):
+ self.properties = properties
+
+ def __call__(self, *args, **kwargs):
+ return self.properties["constant"]
+
+
+class InputEvaluator:
+ """
+ InputEvaluator class, to evaluate Operation.Input nodes.
+ """
+
+ def __call__(self, *args, **kwargs):
+ return args[0]
+
+
+class GenericEvaluator:
+ """
+ GenericEvaluator class, to evaluate Operation.Generic nodes.
+ """
+
+ def __init__(self, operation, properties):
+ self.operation = operation
+ self.properties = properties
+
+ def __call__(self, *args, **kwargs):
+ return self.operation(*args, *self.properties["args"], **self.properties["kwargs"])
+
+
+class GenericTupleEvaluator:
+ """
+ GenericTupleEvaluator class, to evaluate Operation.Generic nodes whose args are packed in a tuple.
+ """
+
+ def __init__(self, operation, properties):
+ self.operation = operation
+ self.properties = properties
+
+ def __call__(self, *args, **kwargs):
+ return self.operation(tuple(args), *self.properties["args"], **self.properties["kwargs"])
diff --git a/frontends/concrete-python/concrete/numpy/representation/graph.py b/frontends/concrete-python/concrete/numpy/representation/graph.py
new file mode 100644
index 000000000..ef317c000
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/representation/graph.py
@@ -0,0 +1,692 @@
+"""
+Declaration of `Graph` class.
+"""
+
+import math
+import re
+from copy import deepcopy
+from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
+
+import networkx as nx
+import numpy as np
+import scipy.special
+
+from ..dtypes import Float, Integer, UnsignedInteger
+from .node import Node
+from .operation import Operation
+
+P_ERROR_PER_ERROR_SIZE_CACHE: Dict[float, Dict[int, float]] = {}
+
+
+class Graph:
+ """
+ Graph class, to represent computation graphs.
+ """
+
+ graph: nx.MultiDiGraph
+
+ input_nodes: Dict[int, Node]
+ output_nodes: Dict[int, Node]
+
+ input_indices: Dict[Node, int]
+
+ is_direct: bool
+
+ def __init__(
+ self,
+ graph: nx.MultiDiGraph,
+ input_nodes: Dict[int, Node],
+ output_nodes: Dict[int, Node],
+ is_direct: bool = False,
+ ):
+ self.graph = graph
+
+ self.input_nodes = input_nodes
+ self.output_nodes = output_nodes
+
+ self.input_indices = {node: index for index, node in input_nodes.items()}
+
+ self.is_direct = is_direct
+
+ self.prune_useless_nodes()
+
+ def __call__(
+ self,
+ *args: Any,
+ p_error: Optional[float] = None,
+ ) -> Union[
+ np.bool_,
+ np.integer,
+ np.floating,
+ np.ndarray,
+ Tuple[Union[np.bool_, np.integer, np.floating, np.ndarray], ...],
+ ]:
+ evaluation = self.evaluate(*args, p_error=p_error)
+ result = tuple(evaluation[node] for node in self.ordered_outputs())
+ return result if len(result) > 1 else result[0]
+
+ def evaluate(
+ self,
+ *args: Any,
+ p_error: Optional[float] = None,
+ ) -> Dict[Node, Union[np.bool_, np.integer, np.floating, np.ndarray]]:
+ r"""
+ Perform the computation `Graph` represents and get resulting values for all nodes.
+
+ Args:
+ *args (List[Any]):
+ inputs to the computation
+
+ p_error (Optional[float]):
+ probability of error for table lookups
+
+ Returns:
+ Dict[Node, Union[np.bool\_, np.integer, np.floating, np.ndarray]]:
+ nodes and their values during computation
+ """
+
+ # pylint: disable=no-member,too-many-nested-blocks,too-many-branches,too-many-statements
+
+ if p_error is None:
+ p_error = 0.0
+
+ assert isinstance(p_error, float)
+
+ node_results: Dict[Node, Union[np.bool_, np.integer, np.floating, np.ndarray]] = {}
+ for node in nx.topological_sort(self.graph):
+ if node.operation == Operation.Input:
+ node_results[node] = node(args[self.input_indices[node]])
+ continue
+
+ pred_results = [node_results[pred] for pred in self.ordered_preds_of(node)]
+
+ if p_error > 0.0 and node.converted_to_table_lookup:
+ variable_input_indices = [
+ idx
+ for idx, pred in enumerate(self.ordered_preds_of(node))
+ if not pred.operation == Operation.Constant
+ ]
+
+ for index in variable_input_indices:
+ pred_node = self.ordered_preds_of(node)[index]
+ if pred_node.operation != Operation.Input:
+ dtype = node.inputs[index].dtype
+ if isinstance(dtype, Integer):
+ # see https://github.com/zama-ai/concrete-numpy/blob/main/docs/_static/p_error_simulation.pdf # noqa: E501 # pylint: disable=line-too-long
+ # to learn more about the distribution of error
+
+ if p_error not in P_ERROR_PER_ERROR_SIZE_CACHE:
+ std_score = math.sqrt(2) * scipy.special.erfcinv(p_error)
+ p_error_per_error_size = {}
+
+ error_size = 1
+ last_p = 1 - p_error
+ while last_p != 1.0 or error_size == 1:
+ new_std_score = (2 * error_size + 1) * std_score
+ new_p = scipy.special.erf(new_std_score / math.sqrt(2))
+
+ p_error_per_error_size[error_size] = new_p - last_p
+
+ last_p = new_p
+ error_size += 1
+
+ # ordering of `p_error_per_error_size` is relied on
+ # during the introduction of the error below
+ # thus we explicitly sort it to make sure it's ordered
+ p_error_per_error_size = dict(
+ sorted(p_error_per_error_size.items())
+ )
+
+ P_ERROR_PER_ERROR_SIZE_CACHE[p_error] = p_error_per_error_size
+ else: # pragma: no cover
+ p_error_per_error_size = P_ERROR_PER_ERROR_SIZE_CACHE[p_error]
+
+ error = np.random.rand(*pred_results[index].shape)
+
+ accumulated_p_error = 0.0
+ for error_size, p_error_for_size in p_error_per_error_size.items():
+ accumulated_p_error += p_error_for_size
+ error = np.where(error < accumulated_p_error, error_size, error)
+
+ error = np.where(error < 1, 0, error).astype(np.int64)
+
+ error_sign = np.random.rand(*pred_results[index].shape)
+ error_sign = np.where(error_sign < 0.5, 1, -1).astype(np.int64)
+
+ new_result = pred_results[index] + (error * error_sign)
+
+ if new_result.shape == (): # pragma: no cover
+ if new_result < dtype.min():
+ new_result = dtype.max() - (dtype.min() - new_result) + 1
+ elif new_result > dtype.max():
+ new_result = dtype.min() + (new_result - dtype.max()) - 1
+
+ else:
+ underflow_indices = np.where(new_result < dtype.min())
+ new_result[underflow_indices] = (
+ dtype.max() - (dtype.min() - new_result[underflow_indices]) + 1
+ )
+
+ overflow_indices = np.where(new_result > dtype.max())
+ new_result[overflow_indices] = (
+ dtype.min() + (new_result[overflow_indices] - dtype.max()) - 1
+ )
+
+ pred_results[index] = new_result
+
+ try:
+ node_results[node] = node(*pred_results)
+ except Exception as error:
+ raise RuntimeError(
+ "Evaluation of the graph failed\n\n"
+ + self.format(
+ highlighted_nodes={node: ["evaluation of this node failed"]},
+ show_bounds=False,
+ )
+ ) from error
+
+ return node_results
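The error-injection loop above combines two independent pieces: a table mapping each error magnitude to its probability (derived from a Gaussian noise model, see the linked p_error document), and a wrap-around of out-of-range results into the integer dtype's range. Both can be sketched in isolation; the sketch below substitutes the standard library's `statistics.NormalDist` for `scipy.special` (an assumption made for illustration, the identities are equivalent):

```python
from statistics import NormalDist


def error_size_distribution(p_error: float) -> dict:
    """Probability that a lookup error has magnitude exactly k, for k = 1, 2, ..."""
    normal = NormalDist()
    # threshold t such that P(|Z| > t) == p_error for Z ~ N(0, 1);
    # equivalent to sqrt(2) * erfcinv(p_error) in the code above
    t = normal.inv_cdf(1 - p_error / 2)

    distribution = {}
    error_size = 1
    last_p = 1 - p_error  # P(|Z| <= t)
    while last_p != 1.0 or error_size == 1:
        # P(|Z| <= (2k + 1) * t), i.e. erf((2k + 1) * t / sqrt(2))
        new_p = 2 * normal.cdf((2 * error_size + 1) * t) - 1
        distribution[error_size] = new_p - last_p
        last_p = new_p
        error_size += 1
    return distribution


def wrap(value: int, low: int, high: int) -> int:
    """Wrap an out-of-range value back into [low, high], like the overflow handling above."""
    if value < low:
        return high - (low - value) + 1
    if value > high:
        return low + (value - high) - 1
    return value
```

The per-size probabilities telescope back to `p_error`, and wrapping behaves like two's-complement overflow, e.g. `wrap(8, -8, 7)` lands on `-8` for a 4-bit signed range.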
+
+ def format(
+ self,
+ maximum_constant_length: int = 25,
+ highlighted_nodes: Optional[Dict[Node, List[str]]] = None,
+ show_types: bool = True,
+ show_bounds: bool = True,
+ show_tags: bool = True,
+ show_locations: bool = False,
+ ) -> str:
+ """
+ Get the textual representation of the `Graph`.
+
+ Args:
+ maximum_constant_length (int, default = 25):
+ maximum length of formatted constants
+
+ highlighted_nodes (Optional[Dict[Node, List[str]]], default = None):
+ nodes to be highlighted and their corresponding messages
+
+ show_types (bool, default = True):
+ whether to show types of nodes
+
+ show_bounds (bool, default = True):
+ whether to show bounds of nodes
+
+ show_tags (bool, default = True):
+ whether to show tags of nodes
+
+ show_locations (bool, default = False):
+ whether to show line information of nodes
+
+ Returns:
+ str:
+ textual representation of the `Graph`
+ """
+
+ # pylint: disable=too-many-branches,too-many-locals,too-many-statements
+ # ruff: noqa: ERA001
+
+ if self.is_direct:
+ show_bounds = False
+
+ # node -> identifier
+ # e.g., id_map[node1] = 2
+ # means line for node1 is in this form %2 = node1.format(...)
+ id_map: Dict[Node, int] = {}
+
+ # lines that will be merged at the end
+ lines: List[str] = []
+
+ # metadata to add to each line
+ # (for alignment, this is done after lines are determined)
+ line_metadata: List[Dict[str, str]] = []
+
+ # default highlighted nodes is empty
+ highlighted_nodes = highlighted_nodes if highlighted_nodes is not None else {}
+
+ # highlight information for lines, this is required because highlights are added to lines
+ # after their type information is added, and we only have line numbers, not nodes
+ highlighted_lines: Dict[int, List[str]] = {}
+
+ # subgraphs to format after the main graph is formatted
+ subgraphs: Dict[str, Graph] = {}
+
+ # format nodes
+ for node in nx.lexicographical_topological_sort(self.graph):
+ # assign a unique id to outputs of node
+ id_map[node] = len(id_map)
+
+ # remember highlights of the node
+ if node in highlighted_nodes:
+ highlighted_lines[len(lines)] = highlighted_nodes[node]
+
+ # extract predecessors and their ids
+ predecessors = []
+ for predecessor in self.ordered_preds_of(node):
+ predecessors.append(f"%{id_map[predecessor]}")
+
+ # start building the line for the node
+ line = ""
+
+ # add output information to the line
+ line += f"%{id_map[node]}"
+
+ # add node information to the line
+ line += " = "
+ line += node.format(predecessors, maximum_constant_length)
+
+ # append line to list of lines
+ lines.append(line)
+
+ # if exists, save the subgraph
+ if node.operation == Operation.Generic and "subgraph" in node.properties["kwargs"]:
+ subgraphs[line] = node.properties["kwargs"]["subgraph"]
+
+ # get formatted bounds
+ bounds = ""
+ if node.bounds is not None:
+ bounds += "∈ ["
+
+ lower, upper = node.bounds
+ assert type(lower) == type(upper) # pylint: disable=unidiomatic-typecheck
+
+ if isinstance(lower, (float, np.float32, np.float64)):
+ bounds += f"{round(lower, 6)}, {round(upper, 6)}"
+ else:
+ bounds += f"{int(lower)}, {int(upper)}"
+
+ bounds += "]"
+
+ # remember metadata of the node
+ line_metadata.append(
+ {
+ "type": f"# {node.output}",
+ "bounds": bounds,
+ "tag": (f"@ {node.tag}" if node.tag != "" else ""),
+ "location": node.location,
+ },
+ )
+
+ # align = signs
+ #
+ # e.g.,
+ #
+ # %1 = ...
+ # %2 = ...
+ # ...
+ # %8 = ...
+ # %9 = ...
+ # %10 = ...
+ # %11 = ...
+ # ...
+ longest_length_before_equals_sign = max(len(line.split("=")[0]) for line in lines)
+ for i, line in enumerate(lines):
+ length_before_equals_sign = len(line.split("=")[0])
+ lines[i] = (
+ " " * (longest_length_before_equals_sign - length_before_equals_sign)
+ ) + line
+
+ # determine which metadata to show
+ shown_metadata_keys = []
+ if show_types:
+ shown_metadata_keys.append("type")
+ if show_bounds:
+ shown_metadata_keys.append("bounds")
+ if show_tags:
+ shown_metadata_keys.append("tag")
+ if show_locations:
+ shown_metadata_keys.append("location")
+
+ # show requested metadata
+ indent = 8
+ for metadata_key in shown_metadata_keys:
+ longest_line_length = max(len(line) for line in lines)
+ lines = [
+ line + (" " * ((longest_line_length - len(line)) + indent)) + metadata[metadata_key]
+ for line, metadata in zip(lines, line_metadata)
+ ]
+
+ # strip whitespaces
+ lines = [line.rstrip() for line in lines]
+
+ # add highlights (this is done in reverse to keep indices consistent)
+ for i in reversed(range(len(lines))):
+ if i in highlighted_lines:
+ for j, message in enumerate(highlighted_lines[i]):
+ highlight = "^" if j == 0 else " "
+ lines.insert(i + 1 + j, f"{highlight * len(lines[i])} {message}")
+
+ # add return information
+ # (if there is a single return, it's in the form `return %id`;
+ # otherwise, it's in the form `return (%id1, %id2, ..., %idN)`)
+ returns: List[str] = []
+ for node in self.output_nodes.values():
+ returns.append(f"%{id_map[node]}")
+ lines.append("return " + (returns[0] if len(returns) == 1 else f"({', '.join(returns)})"))
+
+ # format subgraphs after the actual graph
+ result = "\n".join(lines)
+ if len(subgraphs) > 0:
+ result += "\n\n"
+ result += "Subgraphs:"
+ for line, subgraph in subgraphs.items():
+ subgraph_lines = subgraph.format(
+ maximum_constant_length=maximum_constant_length,
+ highlighted_nodes={},
+ show_types=show_types,
+ show_bounds=False, # doesn't make sense as we don't measure bounds in subgraphs
+ show_tags=show_tags,
+ show_locations=show_locations,
+ ).split("\n")
+
+ result += "\n\n"
+ result += f" {line}:\n\n"
+ result += "\n".join(f" {line}" for line in subgraph_lines)
+
+ return result
+
+ # pylint: enable=too-many-branches,too-many-locals,too-many-statements
+
+ def measure_bounds(
+ self,
+ inputset: Union[Iterable[Any], Iterable[Tuple[Any, ...]]],
+ ) -> Dict[Node, Dict[str, Union[np.integer, np.floating]]]:
+ """
+ Evaluate the `Graph` using an inputset and measure bounds.
+
+ inputset is either an iterable of anything
+ for a single parameter
+
+ or
+
+ an iterable of tuples of anything (with as many elements as there are parameters)
+ for multiple parameters
+
+ e.g.,
+
+ .. code-block:: python
+
+ inputset = [1, 3, 5, 2, 4]
+ def f(x):
+ ...
+
+ inputset = [(1, 2), (2, 4), (3, 1), (2, 2)]
+ def g(x, y):
+ ...
+
+ Args:
+ inputset (Union[Iterable[Any], Iterable[Tuple[Any, ...]]]):
+ inputset to use
+
+ Returns:
+ Dict[Node, Dict[str, Union[np.integer, np.floating]]]:
+ bounds of each node in the `Graph`
+ """
+
+ bounds = {}
+
+ inputset_iterator = iter(inputset)
+
+ sample = next(inputset_iterator)
+ if not isinstance(sample, tuple):
+ sample = (sample,)
+
+ index = 0
+ try:
+ evaluation = self.evaluate(*sample)
+ for node, value in evaluation.items():
+ bounds[node] = {
+ "min": value.min(),
+ "max": value.max(),
+ }
+
+ for sample in inputset_iterator:
+ index += 1
+ if not isinstance(sample, tuple):
+ sample = (sample,)
+
+ evaluation = self.evaluate(*sample)
+ for node, value in evaluation.items():
+ bounds[node] = {
+ "min": np.minimum(bounds[node]["min"], value.min()),
+ "max": np.maximum(bounds[node]["max"], value.max()),
+ }
+
+ except Exception as error:
+ message = f"Bound measurement using inputset[{index}] failed"
+ raise RuntimeError(message) from error
+
+ return bounds
+
+ def update_with_bounds(self, bounds: Dict[Node, Dict[str, Union[np.integer, np.floating]]]):
+ """
+ Update `Value`s within the `Graph` according to measured bounds.
+
+ Args:
+ bounds (Dict[Node, Dict[str, Union[np.integer, np.floating]]]):
+ bounds of each node in the `Graph`
+ """
+
+ for node in self.graph.nodes():
+ if node in bounds:
+ min_bound = bounds[node]["min"]
+ max_bound = bounds[node]["max"]
+
+ node.bounds = (min_bound, max_bound)
+
+ new_value = deepcopy(node.output)
+
+ if isinstance(min_bound, np.integer):
+ new_value.dtype = Integer.that_can_represent(np.array([min_bound, max_bound]))
+ else:
+ new_value.dtype = {
+ np.bool_: UnsignedInteger(1),
+ np.float64: Float(64),
+ np.float32: Float(32),
+ np.float16: Float(16),
+ }[type(min_bound)]
+
+ node.output = new_value
+
+ if node.operation == Operation.Input:
+ node.inputs[0] = new_value
+
+ for successor in self.graph.successors(node):
+ edge_data = self.graph.get_edge_data(node, successor)
+ for edge in edge_data.values():
+ input_idx = edge["input_idx"]
+ successor.inputs[input_idx] = node.output
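`update_with_bounds` relies on `Integer.that_can_represent` to infer the smallest integer type that covers the measured range. A hypothetical stand-in for that helper (its exact semantics are assumed here for illustration) can be written with `int.bit_length`:

```python
def bits_to_represent(low: int, high: int) -> int:
    """Bit-width of the smallest integer type whose range contains [low, high].

    Hypothetical sketch of `Integer.that_can_represent`: signed if any
    value is negative, unsigned otherwise.
    """
    if low < 0:
        # signed: need high <= 2**(n - 1) - 1 and low >= -2**(n - 1)
        positive_bits = high.bit_length() if high > 0 else 0
        return max(positive_bits, (-low - 1).bit_length()) + 1
    # unsigned: need high <= 2**n - 1 (at least 1 bit even for [0, 0])
    return max(high.bit_length(), 1)
```

For example, bounds of `[-8, 7]` fit a 4-bit signed integer, while `[0, 255]` needs 8 unsigned bits.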
+
+ def ordered_inputs(self) -> List[Node]:
+ """
+ Get the input nodes of the `Graph`, ordered by their indices.
+
+ Returns:
+ List[Node]:
+ ordered input nodes
+ """
+
+ return [self.input_nodes[idx] for idx in range(len(self.input_nodes))]
+
+ def ordered_outputs(self) -> List[Node]:
+ """
+ Get the output nodes of the `Graph`, ordered by their indices.
+
+ Returns:
+ List[Node]:
+ ordered output nodes
+ """
+
+ return [self.output_nodes[idx] for idx in range(len(self.output_nodes))]
+
+ def ordered_preds_of(self, node: Node) -> List[Node]:
+ """
+ Get predecessors of `node`, ordered by their indices.
+
+ Args:
+ node (Node):
+ node whose predecessors are requested
+
+ Returns:
+ List[Node]:
+ ordered predecessors of `node`.
+ """
+
+ idx_to_pred: Dict[int, Node] = {}
+ for pred in self.graph.predecessors(node):
+ edge_data = self.graph.get_edge_data(pred, node)
+ idx_to_pred.update((data["input_idx"], pred) for data in edge_data.values())
+ return [idx_to_pred[i] for i in range(len(idx_to_pred))]
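The index reconstruction in `ordered_preds_of` can be exercised with plain dictionaries; the `edge_data` shape below mirrors what a networkx `MultiDiGraph` returns (a dict of edge keys to attribute dicts), with hypothetical node names:

```python
# pred -> edge data, as get_edge_data would return it for each predecessor;
# "a" feeds the node twice (input indices 0 and 2), "b" once (index 1)
edges = {
    "b": {0: {"input_idx": 1}},
    "a": {0: {"input_idx": 0}, 1: {"input_idx": 2}},
}

idx_to_pred = {}
for pred, edge_data in edges.items():
    idx_to_pred.update((data["input_idx"], pred) for data in edge_data.values())

# predecessors come back in argument order, with repetitions preserved
ordered = [idx_to_pred[i] for i in range(len(idx_to_pred))]
```

This is why the graph stores `input_idx` on every edge: predecessor iteration order alone cannot recover argument positions, especially when the same node is used for several arguments.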
+
+ def prune_useless_nodes(self):
+ """
+ Remove unreachable nodes from the graph.
+ """
+
+ useful_nodes: Dict[Node, None] = {}
+
+ current_nodes = {node: None for node in self.ordered_outputs()}
+ while current_nodes:
+ useful_nodes.update(current_nodes)
+
+ next_nodes: Dict[Node, None] = {}
+ for node in current_nodes:
+ next_nodes.update({node: None for node in self.graph.predecessors(node)})
+
+ current_nodes = next_nodes
+
+ useless_nodes = [node for node in self.graph.nodes() if node not in useful_nodes]
+ self.graph.remove_nodes_from(useless_nodes)
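The pruning pass above is a backward reachability walk from the output nodes; a minimal sketch over a plain predecessor map (hypothetical node names) shows the idea:

```python
# node -> its predecessors; "dead" is not reachable from the output
preds = {"out": ["mul"], "mul": ["x", "two"], "dead": ["x"], "x": [], "two": []}

useful = {}
current = {"out": None}  # start from the outputs
while current:
    useful.update(current)
    next_nodes = {}
    for node in current:
        next_nodes.update({p: None for p in preds[node]})
    current = next_nodes

useless = [node for node in preds if node not in useful]
```

Dicts are used instead of sets, as in the method itself, to keep iteration order deterministic.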
+
+ def query_nodes(
+ self,
+ tag_filter: Optional[Union[str, List[str], re.Pattern]] = None,
+ operation_filter: Optional[Union[str, List[str], re.Pattern]] = None,
+ ) -> List[Node]:
+ """
+ Query nodes within the graph.
+
+ Filters work like so:
+ str -> nodes without an exact match are skipped
+ List[str] -> nodes without an exact match for one of the strings in the list are skipped
+ re.Pattern -> nodes without a pattern match are skipped
+
+ Args:
+ tag_filter (Optional[Union[str, List[str], re.Pattern]], default = None):
+ filter for tags
+
+ operation_filter (Optional[Union[str, List[str], re.Pattern]], default = None):
+ filter for operations
+
+ Returns:
+ List[Node]:
+ filtered nodes
+ """
+
+ def match_text_filter(text_filter, text):
+ if text_filter is None:
+ return True
+
+ if isinstance(text_filter, str):
+ return text == text_filter
+
+ if isinstance(text_filter, re.Pattern):
+ return text_filter.match(text)
+
+ return any(text == alternative for alternative in text_filter)
+
+ def get_operation_name(node):
+ result: str
+
+ if node.operation == Operation.Input:
+ result = "input"
+ elif node.operation == Operation.Constant:
+ result = "constant"
+ else:
+ result = node.properties["name"]
+
+ return result
+
+ return [
+ node
+ for node in self.graph.nodes()
+ if (
+ match_text_filter(tag_filter, node.tag)
+ and match_text_filter(operation_filter, get_operation_name(node))
+ )
+ ]
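The three filter behaviours described in the docstring can be checked with a standalone copy of `match_text_filter` (wrapped in `bool` here, whereas the original returns the truthy match object directly):

```python
import re


def match_text_filter(text_filter, text) -> bool:
    if text_filter is None:
        return True  # no filter: everything matches
    if isinstance(text_filter, str):
        return text == text_filter  # exact match
    if isinstance(text_filter, re.Pattern):
        return bool(text_filter.match(text))  # pattern match
    return any(text == alternative for alternative in text_filter)  # one-of
```

For example, `re.compile(r"conv\dd")` selects `conv1d`, `conv2d` and `conv3d` nodes in one query.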
+
+ def maximum_integer_bit_width(
+ self,
+ tag_filter: Optional[Union[str, List[str], re.Pattern]] = None,
+ operation_filter: Optional[Union[str, List[str], re.Pattern]] = None,
+ ) -> int:
+ """
+ Get maximum integer bit-width within the graph.
+
+ Only nodes after filtering will be used to calculate the result.
+
+ Args:
+ tag_filter (Optional[Union[str, List[str], re.Pattern]], default = None):
+ filter for tags
+
+ operation_filter (Optional[Union[str, List[str], re.Pattern]], default = None):
+ filter for operations
+
+ Returns:
+ int:
+ maximum integer bit-width within the graph
+ if there are no integer nodes matching the query, result is -1
+ """
+
+ filtered_bit_widths = (
+ node.output.dtype.bit_width
+ for node in self.query_nodes(tag_filter, operation_filter)
+ if isinstance(node.output.dtype, Integer)
+ )
+ return max(filtered_bit_widths, default=-1)
+
+ def integer_range(
+ self,
+ tag_filter: Optional[Union[str, List[str], re.Pattern]] = None,
+ operation_filter: Optional[Union[str, List[str], re.Pattern]] = None,
+ ) -> Optional[Tuple[int, int]]:
+ """
+ Get integer range of the graph.
+
+ Only nodes after filtering will be used to calculate the result.
+
+ Args:
+ tag_filter (Optional[Union[str, List[str], re.Pattern]], default = None):
+ filter for tags
+
+ operation_filter (Optional[Union[str, List[str], re.Pattern]], default = None):
+ filter for operations
+
+ Returns:
+ Optional[Tuple[int, int]]:
+ minimum and maximum integer value observed during inputset evaluation
+ if there are no integer nodes matching the query, result is None
+ """
+
+ result: Optional[Tuple[int, int]] = None
+
+ if not self.is_direct:
+ filtered_bounds = (
+ node.bounds
+ for node in self.query_nodes(tag_filter, operation_filter)
+ if isinstance(node.output.dtype, Integer) and node.bounds is not None
+ )
+ for min_bound, max_bound in filtered_bounds:
+ assert isinstance(min_bound, np.integer) and isinstance(max_bound, np.integer)
+
+ if result is None:
+ result = (int(min_bound), int(max_bound))
+ else:
+ old_min_bound, old_max_bound = result # pylint: disable=unpacking-non-sequence
+ result = (
+ min(old_min_bound, int(min_bound)),
+ max(old_max_bound, int(max_bound)),
+ )
+
+ return result
diff --git a/frontends/concrete-python/concrete/numpy/representation/node.py b/frontends/concrete-python/concrete/numpy/representation/node.py
new file mode 100644
index 000000000..c8f083047
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/representation/node.py
@@ -0,0 +1,434 @@
+"""
+Declaration of `Node` class.
+"""
+
+import os
+import time
+import traceback
+from copy import deepcopy
+from typing import Any, Callable, Dict, List, Optional, Tuple, Union
+
+import numpy as np
+
+from ..internal.utils import assert_that
+from ..values import Value
+from .evaluator import ConstantEvaluator, GenericEvaluator, GenericTupleEvaluator, InputEvaluator
+from .operation import Operation
+from .utils import KWARGS_IGNORED_IN_FORMATTING, format_constant, format_indexing_element
+
+
+class Node:
+ """
+ Node class, to represent computation in a computation graph.
+ """
+
+ inputs: List[Value]
+ output: Value
+
+ operation: Operation
+ evaluator: Callable
+
+ bounds: Optional[Tuple[Union[int, float], Union[int, float]]]
+ properties: Dict[str, Any]
+
+ location: str
+ tag: str
+ created_at: float
+
+ @staticmethod
+ def constant(constant: Any) -> "Node":
+ """
+ Create an Operation.Constant node.
+
+ Args:
+ constant (Any):
+ constant to represent
+
+ Returns:
+ Node:
+ node representing constant
+
+ Raises:
+ ValueError:
+ if the constant is not representable
+ """
+
+ try:
+ value = Value.of(constant)
+ except Exception as error:
+ message = f"Constant {repr(constant)} is not supported"
+ raise ValueError(message) from error
+
+ properties = {"constant": np.array(constant)}
+ return Node([], value, Operation.Constant, ConstantEvaluator(properties), properties)
+
+ @staticmethod
+ def generic(
+ name: str,
+ inputs: List[Value],
+ output: Value,
+ operation: Callable,
+ args: Optional[Tuple[Any, ...]] = None,
+ kwargs: Optional[Dict[str, Any]] = None,
+ attributes: Optional[Dict[str, Any]] = None,
+ ):
+ """
+ Create an Operation.Generic node.
+
+ Args:
+ name (str):
+ name of the operation
+
+ inputs (List[Value]):
+ inputs to the operation
+
+ output (Value):
+ output of the operation
+
+ operation (Callable):
+ operation itself
+
+ args (Optional[Tuple[Any, ...]]):
+ args to pass to operation during evaluation
+
+ kwargs (Optional[Dict[str, Any]]):
+ kwargs to pass to operation during evaluation
+
+ attributes (Optional[Dict[str, Any]]):
+ attributes of the operation
+
+ Returns:
+ Node:
+ node representing operation
+ """
+
+ properties = {
+ "name": name,
+ "args": args if args is not None else (),
+ "kwargs": kwargs if kwargs is not None else {},
+ "attributes": attributes if attributes is not None else {},
+ }
+
+ return Node(
+ inputs,
+ output,
+ Operation.Generic,
+ (
+ GenericTupleEvaluator(operation, properties) # type: ignore
+ if name in ["concatenate"]
+ else GenericEvaluator(operation, properties) # type: ignore
+ ),
+ properties,
+ )
+
+ @staticmethod
+ def input(name: str, value: Value) -> "Node":
+ """
+ Create an Operation.Input node.
+
+ Args:
+ name (str):
+ name of the input
+
+ value (Value):
+ value of the input
+
+ Returns:
+ Node:
+ node representing input
+ """
+
+ return Node([value], value, Operation.Input, InputEvaluator(), {"name": name})
+
+ def __init__(
+ self,
+ inputs: List[Value],
+ output: Value,
+ operation: Operation,
+ evaluator: Callable,
+ properties: Optional[Dict[str, Any]] = None,
+ ):
+ self.inputs = inputs
+ self.output = output
+
+ self.operation = operation
+ self.evaluator = evaluator # type: ignore
+
+ self.bounds = None
+ self.properties = properties if properties is not None else {}
+
+ # pylint: disable=cyclic-import,import-outside-toplevel
+
+ import concrete.numpy as cnp
+
+ cnp_directory = os.path.dirname(cnp.__file__)
+
+ import concrete.onnx as coonx
+
+ coonx_directory = os.path.dirname(coonx.__file__)
+
+ # pylint: enable=cyclic-import,import-outside-toplevel
+
+ for frame in reversed(traceback.extract_stack()):
+ if frame.filename == "<__array_function__ internals>":
+ continue
+
+ if frame.filename.startswith(cnp_directory):
+ continue
+
+ if frame.filename.startswith(coonx_directory):
+ continue
+
+ self.location = f"{frame.filename}:{frame.lineno}"
+ break
+
+ # pylint: disable=cyclic-import,import-outside-toplevel
+
+ from ..extensions.tag import tag_context
+
+ self.tag = ".".join(tag_context.stack)
+
+ # pylint: enable=cyclic-import,import-outside-toplevel
+
+ self.created_at = time.time()
+
+ def __call__(self, *args: List[Any]) -> Union[np.bool_, np.integer, np.floating, np.ndarray]:
+ def generic_error_message() -> str:
+ result = f"Evaluation of {self.operation.value} '{self.label()}' node"
+ if len(args) != 0:
+ result += f" using {', '.join(repr(arg) for arg in args)}"
+ return result
+
+ if len(args) != len(self.inputs):
+ message = f"{generic_error_message()} failed because of invalid number of arguments"
+ raise ValueError(message)
+
+ for arg, input_ in zip(args, self.inputs):
+ try:
+ arg_value = Value.of(arg)
+ except Exception as error:
+ arg_str = "the argument" if len(args) == 1 else f"argument {repr(arg)}"
+ message = f"{generic_error_message()} failed because {arg_str} is not valid"
+ raise ValueError(message) from error
+
+ if input_.shape != arg_value.shape:
+ arg_str = "the argument" if len(args) == 1 else f"argument {repr(arg)}"
+ message = (
+ f"{generic_error_message()} failed because "
+ f"{arg_str} does not have the expected "
+ f"shape of {input_.shape}"
+ )
+ raise ValueError(message)
+
+ result = self.evaluator(*args)
+
+ if isinstance(result, int) and -(2**63) < result < (2**63) - 1:
+ result = np.int64(result)
+ if isinstance(result, float):
+ result = np.float64(result)
+
+ if isinstance(result, list):
+ try:
+ np_result = np.array(result)
+ result = np_result
+ except Exception: # pylint: disable=broad-except
+ # here we try our best to convert the list to np.ndarray
+ # if it fails we raise the exception below
+ pass
+
+ if not isinstance(result, (np.bool_, np.integer, np.floating, np.ndarray)):
+ message = (
+ f"{generic_error_message()} resulted in {repr(result)} "
+ f"of type {result.__class__.__name__} "
+ f"which is not acceptable either because of the type or because of overflow"
+ )
+ raise ValueError(message)
+
+ if isinstance(result, np.ndarray):
+ dtype = result.dtype
+ if (
+ not np.issubdtype(dtype, np.integer)
+ and not np.issubdtype(dtype, np.floating)
+ and not np.issubdtype(dtype, np.bool_)
+ ):
+ message = (
+ f"{generic_error_message()} resulted in {repr(result)} "
+ f"of type np.ndarray and of underlying type '{type(dtype).__name__}' "
+ f"which is not acceptable because of the underlying type"
+ )
+ raise ValueError(message)
+
+ if result.shape != self.output.shape:
+ message = (
+ f"{generic_error_message()} resulted in {repr(result)} "
+ f"which does not have the expected "
+ f"shape of {self.output.shape}"
+ )
+ raise ValueError(message)
+
+ return result
+
+ def format(self, predecessors: List[str], maximum_constant_length: int = 45) -> str:
+ """
+ Get the textual representation of the `Node` (dependent on preds).
+
+ Args:
+ predecessors (List[str]):
+ predecessor names to this node
+
+ maximum_constant_length (int, default = 45):
+ maximum length of formatted constants
+
+ Returns:
+ str:
+ textual representation of the `Node` (dependent on preds)
+ """
+
+ if self.operation == Operation.Constant:
+ return format_constant(self(), maximum_constant_length)
+
+ if self.operation == Operation.Input:
+ return self.properties["name"]
+
+ assert_that(self.operation == Operation.Generic)
+
+ name = self.properties["name"]
+
+ if name == "index.static":
+ index = self.properties["kwargs"]["index"]
+ elements = [format_indexing_element(element) for element in index]
+ return f"{predecessors[0]}[{', '.join(elements)}]"
+
+ if name == "assign.static":
+ index = self.properties["kwargs"]["index"]
+ elements = [format_indexing_element(element) for element in index]
+ return f"({predecessors[0]}[{', '.join(elements)}] = {predecessors[1]})"
+
+ if name == "concatenate":
+ args = [f"({', '.join(predecessors)})"]
+ else:
+ args = deepcopy(predecessors)
+
+ if name == "array":
+ values = str(np.array(predecessors).reshape(self.output.shape).tolist()).replace(
+ "'", ""
+ )
+ return f"array({format_constant(values, maximum_constant_length)})"
+
+ args.extend(
+ format_constant(value, maximum_constant_length) for value in self.properties["args"]
+ )
+ args.extend(
+ f"{name}={format_constant(value, maximum_constant_length)}"
+ for name, value in self.properties["kwargs"].items()
+ if name not in KWARGS_IGNORED_IN_FORMATTING
+ )
+
+ return f"{name}({', '.join(args)})"
+
+ def label(self) -> str:
+ """
+ Get the textual representation of the `Node` (independent of preds).
+
+ Returns:
+ str:
+ textual representation of the `Node` (independent of preds).
+ """
+
+ if self.operation == Operation.Constant:
+ return format_constant(self(), maximum_length=45, keep_newlines=True)
+
+ if self.operation == Operation.Input:
+ return self.properties["name"]
+
+ assert_that(self.operation == Operation.Generic)
+
+ name = self.properties["name"]
+
+ if name == "index.static":
+ name = self.format(["□"])
+
+ if name == "assign.static":
+ name = self.format(["□", "□"])[1:-1]
+
+ return name
+
+ @property
+ def converted_to_table_lookup(self) -> bool:
+ """
+ Get whether the node is converted to a table lookup during MLIR conversion.
+
+ Returns:
+ bool:
+ True if the node is converted to a table lookup, False otherwise
+ """
+
+ if (
+ all(value.is_encrypted for value in self.inputs)
+ and self.operation == Operation.Generic
+ and self.properties["name"]
+ in [
+ "bitwise_and",
+ "bitwise_or",
+ "bitwise_xor",
+ "equal",
+ "greater",
+ "greater_equal",
+ "left_shift",
+ "less",
+ "less_equal",
+ "not_equal",
+ "right_shift",
+ ]
+ ):
+ return False
+
+ return self.operation == Operation.Generic and self.properties["name"] not in [
+ "add",
+ "array",
+ "assign.static",
+ "broadcast_to",
+ "concatenate",
+ "conv1d",
+ "conv2d",
+ "conv3d",
+ "dot",
+ "expand_dims",
+ "index.static",
+ "matmul",
+ "maxpool",
+ "multiply",
+ "negative",
+ "ones",
+ "reshape",
+ "squeeze",
+ "subtract",
+ "sum",
+ "transpose",
+ "zeros",
+ ]
+
+ @property
+ def is_fusable(self) -> bool:
+ """
+ Get whether the node can be fused into a table lookup.
+
+ Returns:
+ bool:
+ True if the node can be fused into a table lookup, False otherwise
+ """
+
+ if self.converted_to_table_lookup:
+ return True
+
+ return self.operation != Operation.Generic or self.properties["name"] in [
+ "add",
+ "multiply",
+ "negative",
+ "ones",
+ "subtract",
+ "zeros",
+ ]
+
+ def __lt__(self, other) -> bool:
+ return self.created_at < other.created_at
diff --git a/frontends/concrete-python/concrete/numpy/representation/operation.py b/frontends/concrete-python/concrete/numpy/representation/operation.py
new file mode 100644
index 000000000..bc025e7b9
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/representation/operation.py
@@ -0,0 +1,29 @@
+"""
+Declaration of `Operation` enum.
+"""
+
+from enum import Enum
+
+
+class Operation(Enum):
+ """
+ Operation enum, to distinguish nodes within a computation graph.
+ """
+
+ # pylint: disable=invalid-name
+
+ Constant = "constant"
+ Generic = "generic"
+ Input = "input"
+
+ # pylint: enable=invalid-name
+
+
+# https://graphviz.org/doc/info/colors.html#svg
+
+OPERATION_COLOR_MAPPING = {
+ Operation.Constant: "grey",
+ Operation.Generic: "black",
+ Operation.Input: "crimson",
+ "output": "gold",
+}
diff --git a/frontends/concrete-python/concrete/numpy/representation/utils.py b/frontends/concrete-python/concrete/numpy/representation/utils.py
new file mode 100644
index 000000000..9c819a46e
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/representation/utils.py
@@ -0,0 +1,114 @@
+"""
+Declaration of various functions and constants related to representation of computation.
+"""
+
+from typing import Any, Dict, Hashable, Set, Union
+
+import numpy as np
+
+from ..internal.utils import assert_that
+
+KWARGS_IGNORED_IN_FORMATTING: Set[str] = {
+ "subgraph",
+ "terminal_node",
+}
+
+SPECIAL_OBJECT_MAPPING: Dict[Any, str] = {
+ np.float16: "float16",
+ np.float32: "float32",
+ np.float64: "float64",
+ np.int8: "int8",
+ np.int16: "int16",
+ np.int32: "int32",
+ np.int64: "int64",
+ np.uint8: "uint8",
+ np.uint16: "uint16",
+ np.uint32: "uint32",
+ np.uint64: "uint64",
+ np.byte: "byte",
+ np.short: "short",
+ np.intc: "intc",
+ np.int_: "int_",
+ np.longlong: "longlong",
+ np.ubyte: "ubyte",
+ np.ushort: "ushort",
+ np.uintc: "uintc",
+ np.uint: "uint",
+ np.ulonglong: "ulonglong",
+}
+
+
+def format_constant(constant: Any, maximum_length: int = 45, keep_newlines: bool = False) -> str:
+ """
+ Get the textual representation of a constant.
+
+ Args:
+ constant (Any):
+ constant to format
+
+ maximum_length (int, default = 45):
+ maximum length of the resulting string
+
+ keep_newlines (bool, default = False):
+ whether to keep newlines or not
+
+ Returns:
+ str:
+ textual representation of `constant`
+ """
+
+ if isinstance(constant, Hashable) and constant in SPECIAL_OBJECT_MAPPING:
+ return SPECIAL_OBJECT_MAPPING[constant]
+
+ # maximum_length should not be smaller than 7 characters because
+ # the constant will be formatted to `x ... y`
+ # where x and y are part of the constant, and they are at least 1 character
+ assert_that(maximum_length >= 7)
+
+ result = str(constant)
+ if not keep_newlines:
+ result = result.replace("\n", "")
+
+ if len(result) > maximum_length:
+ from_start = (maximum_length - 5) // 2
+ from_end = (maximum_length - 5) - from_start
+
+ if keep_newlines and "\n" in result:
+ result = f"{result[:from_start]}\n...\n{result[-from_end:]}"
+ else:
+ result = f"{result[:from_start]} ... {result[-from_end:]}"
+
+ return result
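The truncation rule in `format_constant` (shorten to `x ... y`, where the 5 characters of `" ... "` come out of the length budget) can be sketched standalone:

```python
def truncate(text: str, maximum_length: int = 45) -> str:
    """Shorten text to `x ... y`, keeping the result exactly maximum_length long."""
    if len(text) <= maximum_length:
        return text
    # split the remaining budget between the head and the tail
    from_start = (maximum_length - 5) // 2
    from_end = (maximum_length - 5) - from_start
    return f"{text[:from_start]} ... {text[-from_end:]}"
```

This is also why the method asserts `maximum_length >= 7`: one character of head, one of tail, and the 5-character ellipsis is the shortest possible result.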
+
+
+def format_indexing_element(indexing_element: Union[int, np.integer, slice]):
+ """
+ Format an indexing element.
+
+ This is required mainly for slices, whose naive string representation is very
+ long and verbose. To give an example, `x[:, 2:]` will have the following index
+ `[slice(None, None, None), slice(2, None, None)]` if printed naively. With this helper,
+ it will be formatted as `[:, 2:]`.
+
+ Args:
+ indexing_element (Union[int, np.integer, slice]):
+ indexing element to format
+
+ Returns:
+ str:
+ textual representation of `indexing_element`
+ """
+
+ result = ""
+ if isinstance(indexing_element, slice):
+ if indexing_element.start is not None:
+ result += str(indexing_element.start)
+ result += ":"
+ if indexing_element.stop is not None:
+ result += str(indexing_element.stop)
+ if indexing_element.step is not None:
+ result += ":"
+ result += str(indexing_element.step)
+ else:
+ result += str(indexing_element)
+ return result.replace("\n", " ")
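The slice branch above can be checked with a small standalone copy (named `fmt` here for brevity):

```python
def fmt(s: slice) -> str:
    """Render a slice the way it would be written in source, e.g. `1:10:2`."""
    result = ""
    if s.start is not None:
        result += str(s.start)
    result += ":"
    if s.stop is not None:
        result += str(s.stop)
    if s.step is not None:
        result += ":" + str(s.step)
    return result
```

So `x[:, 2:]` formats as `[:, 2:]` instead of `[slice(None, None, None), slice(2, None, None)]`.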
diff --git a/frontends/concrete-python/concrete/numpy/tracing/__init__.py b/frontends/concrete-python/concrete/numpy/tracing/__init__.py
new file mode 100644
index 000000000..e0f30edfa
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/tracing/__init__.py
@@ -0,0 +1,5 @@
+"""
+Provide `function` to `computation graph` functionality.
+"""
+
+from .tracer import ScalarAnnotation, TensorAnnotation, Tracer
diff --git a/frontends/concrete-python/concrete/numpy/tracing/tracer.py b/frontends/concrete-python/concrete/numpy/tracing/tracer.py
new file mode 100644
index 000000000..49abb53b1
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/tracing/tracer.py
@@ -0,0 +1,867 @@
+"""
+Declaration of `Tracer` class.
+"""
+
+import inspect
+from copy import deepcopy
+from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Type, Union, cast
+
+import networkx as nx
+import numpy as np
+from numpy.typing import DTypeLike
+
+from ..dtypes import BaseDataType, Float, Integer
+from ..internal.utils import assert_that
+from ..representation import Graph, Node, Operation
+from ..representation.utils import format_indexing_element
+from ..values import Value
+
+
+class Tracer:
+ """
+ Tracer class, to create computation graphs from python functions.
+ """
+
+ computation: Node
+ input_tracers: List["Tracer"]
+ output: Value
+
+ # property to keep track of assignments
+ last_version: Optional["Tracer"] = None
+
+ # variables to control the behavior of certain functions
+ _is_tracing: bool = False
+ _is_direct: bool = False
+
+ @staticmethod
+ def trace(function: Callable, parameters: Dict[str, Value], is_direct: bool = False) -> Graph:
+ """
+ Trace `function` and create the `Graph` that represents it.
+
+ Args:
+ function (Callable):
+ function to trace
+
+ parameters (Dict[str, Value]):
+ parameters of function to trace
+ e.g. parameter x is an EncryptedScalar holding a 7-bit UnsignedInteger
+
+ is_direct (bool, default = False):
+ whether the tracing is done on actual parameters or placeholders
+
+ Returns:
+ Graph:
+ computation graph corresponding to `function`
+ """
+
+ # pylint: disable=too-many-statements
+
+ signature = inspect.signature(function)
+
+ missing_args = list(signature.parameters)
+ for arg in parameters.keys():
+ missing_args.remove(arg)
+ assert_that(len(missing_args) == 0)
+
+ arguments = {}
+ input_indices = {}
+
+ for index, param in enumerate(signature.parameters.keys()):
+ node = Node.input(param, parameters[param])
+ arguments[param] = Tracer(node, [])
+ input_indices[node] = index
+
+ Tracer._is_direct = is_direct
+
+ Tracer._is_tracing = True
+ output_tracers: Any = function(**arguments)
+ Tracer._is_tracing = False
+
+ if not isinstance(output_tracers, tuple):
+ output_tracers = (output_tracers,)
+
+ output_tracer_list = list(output_tracers)
+ for i, output_tracer in enumerate(output_tracer_list):
+ if isinstance(output_tracer, Tracer) and output_tracer.last_version is not None:
+ output_tracer_list[i] = output_tracer.last_version
+ output_tracers = tuple(output_tracer_list)
+
+ sanitized_tracers = []
+ for tracer in output_tracers:
+ if isinstance(tracer, Tracer):
+ sanitized_tracers.append(tracer)
+ continue
+
+ try:
+ sanitized_tracers.append(Tracer.sanitize(tracer))
+ except Exception as error:
+ message = (
+ f"Function '{function.__name__}' "
+ f"returned '{tracer}', "
+ f"which is not supported"
+ )
+ raise ValueError(message) from error
+
+ output_tracers = tuple(sanitized_tracers)
+
+ def create_graph_from_output_tracers(
+ arguments: Dict[str, Tracer],
+ output_tracers: Tuple[Tracer, ...],
+ ) -> nx.MultiDiGraph:
+ graph = nx.MultiDiGraph()
+
+ visited_tracers: Set[Tracer] = set()
+ current_tracers = {tracer: None for tracer in output_tracers}
+
+ while current_tracers:
+ next_tracers: Dict[Tracer, None] = {}
+ for tracer in current_tracers:
+ if tracer not in visited_tracers:
+ current_node = tracer.computation
+ graph.add_node(current_node)
+
+ for input_idx, input_tracer in enumerate(tracer.input_tracers):
+ pred_node = input_tracer.computation
+
+ graph.add_node(pred_node)
+ graph.add_edge(
+ pred_node,
+ current_node,
+ input_idx=input_idx,
+ )
+
+ if input_tracer not in visited_tracers:
+ next_tracers.update({input_tracer: None})
+
+ visited_tracers.add(tracer)
+
+ current_tracers = next_tracers
+
+ assert_that(nx.algorithms.dag.is_directed_acyclic_graph(graph))
+
+ unique_edges = {
+ (pred, succ, tuple((k, v) for k, v in edge_data.items()))
+ for pred, succ, edge_data in graph.edges(data=True)
+ }
+ assert_that(len(unique_edges) == len(graph.edges))
+
+ for tracer in arguments.values():
+ graph.add_node(tracer.computation)
+
+ return graph
+
+ graph = create_graph_from_output_tracers(arguments, output_tracers)
+ input_nodes = {
+ input_indices[node]: node
+ for node in graph.nodes()
+ if len(graph.pred[node]) == 0 and node.operation == Operation.Input
+ }
+ output_nodes = {
+ output_idx: tracer.computation for output_idx, tracer in enumerate(output_tracers)
+ }
+
+ return Graph(graph, input_nodes, output_nodes, is_direct)
+
+ # pylint: enable=too-many-statements
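`Tracer.trace` works by calling the user's function with `Tracer` placeholders instead of real values: every overloaded operator records a node, and the graph is rebuilt by walking back from the output tracers. A minimal self-contained sketch of the same idea (toy names, not the concrete.numpy API):

```python
# Toy sketch of operator-overloading-based tracing: calling `f` with
# ToyTracer inputs records every operation instead of computing it.

class ToyTracer:
    def __init__(self, op, inputs):
        self.op = op          # operation name, e.g. "input_0" or "add"
        self.inputs = inputs  # predecessor tracers

    def __add__(self, other):
        return ToyTracer("add", [self, other])

    def __mul__(self, other):
        return ToyTracer("mul", [self, other])


def toy_trace(f, n_args):
    """Call `f` on placeholder tracers and return the output tracer."""
    args = [ToyTracer(f"input_{i}", []) for i in range(n_args)]
    return f(*args)


def postorder(tracer):
    """Walk the recorded graph back from the output tracer."""
    ops = []
    for inp in tracer.inputs:
        ops.extend(postorder(inp))
    ops.append(tracer.op)
    return ops


output = toy_trace(lambda x, y: x + y * y, 2)
print(postorder(output))  # ['input_0', 'input_1', 'input_1', 'mul', 'add']
```

The real implementation additionally deduplicates visited tracers and stores the graph in a `networkx.MultiDiGraph`, but the recording mechanism is the same.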
+
+ def __init__(self, computation: Node, input_tracers: List["Tracer"]):
+ self.computation = computation
+ self.input_tracers = input_tracers
+ self.output = computation.output
+
+ for i, tracer in enumerate(self.input_tracers):
+ self.input_tracers[i] = tracer if tracer.last_version is None else tracer.last_version
+
+ def __hash__(self) -> int:
+ return id(self)
+
+ def __bool__(self) -> bool:
+ # pylint: disable=invalid-bool-returned
+
+ message = "Branching within circuits is not possible"
+ raise RuntimeError(message)
+
+ @staticmethod
+ def sanitize(value: Any) -> Any:
+ """
+ Try to create a tracer from a value.
+
+ Args:
+ value (Any):
+ value to use
+
+ Returns:
+ Any:
+ resulting tracer
+ """
+
+ if isinstance(value, tuple):
+ return tuple(Tracer.sanitize(item) for item in value)
+
+ if isinstance(value, Tracer):
+ return value
+
+ computation = Node.constant(value)
+ return Tracer(computation, [])
+
+ SUPPORTED_NUMPY_OPERATORS: Set[Any] = {
+ np.abs,
+ np.absolute,
+ np.add,
+ np.arccos,
+ np.arccosh,
+ np.arcsin,
+ np.arcsinh,
+ np.arctan,
+ np.arctan2,
+ np.arctanh,
+ np.around,
+ np.bitwise_and,
+ np.bitwise_or,
+ np.bitwise_xor,
+ np.broadcast_to,
+ np.cbrt,
+ np.ceil,
+ np.clip,
+ np.concatenate,
+ np.copysign,
+ np.cos,
+ np.cosh,
+ np.deg2rad,
+ np.degrees,
+ np.divide,
+ np.dot,
+ np.equal,
+ np.exp,
+ np.exp2,
+ np.expand_dims,
+ np.expm1,
+ np.fabs,
+ np.float_power,
+ np.floor,
+ np.floor_divide,
+ np.fmax,
+ np.fmin,
+ np.fmod,
+ np.gcd,
+ np.greater,
+ np.greater_equal,
+ np.heaviside,
+ np.hypot,
+ np.invert,
+ np.isfinite,
+ np.isinf,
+ np.isnan,
+ np.lcm,
+ np.ldexp,
+ np.left_shift,
+ np.less,
+ np.less_equal,
+ np.log,
+ np.log10,
+ np.log1p,
+ np.log2,
+ np.logaddexp,
+ np.logaddexp2,
+ np.logical_and,
+ np.logical_not,
+ np.logical_or,
+ np.logical_xor,
+ np.matmul,
+ np.maximum,
+ np.minimum,
+ np.mod,
+ np.multiply,
+ np.negative,
+ np.nextafter,
+ np.not_equal,
+ np.ones_like,
+ np.positive,
+ np.power,
+ np.rad2deg,
+ np.radians,
+ np.reciprocal,
+ np.remainder,
+ np.reshape,
+ np.right_shift,
+ np.rint,
+ np.round_,
+ np.sign,
+ np.signbit,
+ np.sin,
+ np.sinh,
+ np.spacing,
+ np.sqrt,
+ np.square,
+ np.squeeze,
+ np.subtract,
+ np.sum,
+ np.tan,
+ np.tanh,
+ np.transpose,
+ np.true_divide,
+ np.trunc,
+ np.where,
+ np.zeros_like,
+ }
+
+ SUPPORTED_KWARGS: Dict[Any, Set[str]] = {
+ np.around: {
+ "decimals",
+ },
+ np.broadcast_to: {
+ "shape",
+ },
+ np.concatenate: {
+ "axis",
+ },
+ np.expand_dims: {
+ "axis",
+ },
+ np.ones_like: {
+ "dtype",
+ },
+ np.reshape: {
+ "newshape",
+ },
+ np.round_: {
+ "decimals",
+ },
+ np.squeeze: {
+ "axis",
+ },
+ np.sum: {
+ "axis",
+ "keepdims",
+ },
+ np.transpose: {
+ "axes",
+ },
+ np.zeros_like: {
+ "dtype",
+ },
+ }
+
+ @staticmethod
+ def _trace_numpy_operation(operation: Callable, *args, **kwargs) -> "Tracer":
+ """
+ Trace an arbitrary numpy operation into an Operation.Generic node.
+
+ Args:
+ operation (Callable):
+ operation to trace
+
+ args (List[Any]):
+ args of the arbitrary computation
+
+ kwargs (Dict[str, Any]):
+ kwargs of the arbitrary computation
+
+ Returns:
+ Tracer:
+ tracer representing the arbitrary computation
+ """
+
+ if operation not in Tracer.SUPPORTED_NUMPY_OPERATORS:
+ message = f"Function 'np.{operation.__name__}' is not supported"
+ raise RuntimeError(message)
+
+ supported_kwargs = Tracer.SUPPORTED_KWARGS.get(operation, set())
+ for kwarg in kwargs:
+ if kwarg not in supported_kwargs:
+ message = (
+ f"Function 'np.{operation.__name__}' is not supported with kwarg '{kwarg}'"
+ )
+ raise RuntimeError(message)
+
+ if operation == np.ones_like: # pylint: disable=comparison-with-callable
+ dtype = kwargs.get("dtype", np.int64)
+ return Tracer(Node.constant(np.ones(args[0].shape, dtype=dtype)), [])
+
+ if operation == np.zeros_like: # pylint: disable=comparison-with-callable
+ dtype = kwargs.get("dtype", np.int64)
+ return Tracer(Node.constant(np.zeros(args[0].shape, dtype=dtype)), [])
+
+ def sampler(arg: Any) -> Any:
+ if isinstance(arg, tuple):
+ return tuple(sampler(item) for item in arg)
+
+ output = arg.output
+ assert_that(isinstance(output.dtype, (Float, Integer)))
+
+ dtype: Any = np.int64
+ if isinstance(output.dtype, Float):
+ assert_that(output.dtype.bit_width in [16, 32, 64])
+ dtype = {64: np.float64, 32: np.float32, 16: np.float16}[output.dtype.bit_width]
+
+ if output.shape == ():
+ return dtype(1)
+
+ return np.ones(output.shape, dtype=dtype)
+
+ sample = [sampler(arg) for arg in args]
+ evaluation = operation(*sample, **kwargs)
+
+ def extract_tracers(arg: Any, tracers: List[Tracer]):
+ if isinstance(arg, tuple):
+ for item in arg:
+ extract_tracers(item, tracers)
+
+ if isinstance(arg, Tracer):
+ tracers.append(arg)
+
+ tracers: List[Tracer] = []
+ for arg in args:
+ extract_tracers(arg, tracers)
+
+ output_value = Value.of(evaluation)
+ output_value.is_encrypted = any(tracer.output.is_encrypted for tracer in tracers)
+
+ if Tracer._is_direct and isinstance(output_value.dtype, Integer):
+
+ assert all(isinstance(tracer.output.dtype, Integer) for tracer in tracers)
+ dtypes = cast(List[Integer], [tracer.output.dtype for tracer in tracers])
+
+ output_value.dtype.bit_width = max(dtype.bit_width for dtype in dtypes)
+ output_value.dtype.is_signed = any(dtype.is_signed for dtype in dtypes)
+
+ computation = Node.generic(
+ operation.__name__,
+ [tracer.output for tracer in tracers],
+ output_value,
+ operation,
+ kwargs=kwargs,
+ )
+ return Tracer(computation, tracers)
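To learn the output shape and dtype of an arbitrary numpy call without real data, `_trace_numpy_operation` evaluates the operation once on dummy arrays of ones with matching shapes and dtypes (the `sampler` above). The trick in isolation, with an assumed helper name:

```python
import numpy as np

def infer_output(operation, shapes, dtype=np.int64, **kwargs):
    """Run `operation` on all-ones dummies to discover the result's
    shape and dtype (same idea as the `sampler` in `_trace_numpy_operation`)."""
    samples = [np.ones(shape, dtype=dtype) for shape in shapes]
    evaluation = operation(*samples, **kwargs)
    return evaluation.shape, evaluation.dtype

print(infer_output(np.matmul, [(2, 3), (3, 4)]))  # ((2, 4), dtype('int64'))
```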
+
+ def __array_ufunc__(self, ufunc, method, *args, **kwargs):
+ """
+ Numpy ufunc hook.
+
+ (https://numpy.org/doc/stable/user/basics.dispatch.html#basics-dispatch)
+ """
+
+ if method == "__call__":
+ sanitized_args = [self.sanitize(arg) for arg in args]
+ return Tracer._trace_numpy_operation(ufunc, *sanitized_args, **kwargs)
+
+ message = "Only __call__ hook is supported for numpy ufuncs"
+ raise RuntimeError(message)
+
+ def __array_function__(self, func, _types, args, kwargs):
+ """
+ Numpy function hook.
+
+ (https://numpy.org/doc/stable/user/basics.dispatch.html#basics-dispatch)
+ """
+
+ if func is np.broadcast_to:
+ sanitized_args = [self.sanitize(args[0])]
+ if len(args) > 1:
+ kwargs["shape"] = args[1]
+ elif func is np.reshape:
+ sanitized_args = [self.sanitize(args[0])]
+ if len(args) > 1:
+ kwargs["newshape"] = args[1]
+ elif func is np.transpose:
+ sanitized_args = [self.sanitize(args[0])]
+ if len(args) > 1:
+ kwargs["axes"] = args[1]
+ else:
+ sanitized_args = [self.sanitize(arg) for arg in args]
+
+ return Tracer._trace_numpy_operation(func, *sanitized_args, **kwargs)
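`__array_ufunc__` and `__array_function__` are the standard NumPy dispatch hooks: defining them on a class makes calls like `np.add(x, 1)` or `np.reshape(x, ...)` route to that class instead of executing eagerly, which is how `Tracer` intercepts numpy calls during tracing. A minimal standalone demonstration (toy class, not the real `Tracer`):

```python
import numpy as np

class Recorder:
    """Toy class that intercepts NumPy calls via the dispatch protocol."""

    def __array_ufunc__(self, ufunc, method, *args, **kwargs):
        # np.add(recorder, 1) lands here with ufunc=np.add, method="__call__"
        return ("ufunc", ufunc.__name__, method)

    def __array_function__(self, func, types, args, kwargs):
        # np.reshape(recorder, (2,)) lands here with func=np.reshape
        return ("function", func.__name__)

r = Recorder()
print(np.add(r, 1))         # ('ufunc', 'add', '__call__')
print(np.reshape(r, (2,)))  # ('function', 'reshape')
```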
+
+ def __add__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.add, self, self.sanitize(other))
+
+ def __radd__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.add, self.sanitize(other), self)
+
+ def __sub__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.subtract, self, self.sanitize(other))
+
+ def __rsub__(self, other) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.subtract, self.sanitize(other), self)
+
+ def __mul__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.multiply, self, self.sanitize(other))
+
+ def __rmul__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.multiply, self.sanitize(other), self)
+
+ def __truediv__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.true_divide, self, self.sanitize(other))
+
+ def __rtruediv__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.true_divide, self.sanitize(other), self)
+
+ def __floordiv__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.floor_divide, self, self.sanitize(other))
+
+ def __rfloordiv__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.floor_divide, self.sanitize(other), self)
+
+ def __pow__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.power, self, self.sanitize(other))
+
+ def __rpow__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.power, self.sanitize(other), self)
+
+ def __mod__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.mod, self, self.sanitize(other))
+
+ def __rmod__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.mod, self.sanitize(other), self)
+
+ def __matmul__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.matmul, self, self.sanitize(other))
+
+ def __rmatmul__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.matmul, self.sanitize(other), self)
+
+ def __neg__(self) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.negative, self)
+
+ def __pos__(self) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.positive, self)
+
+ def __abs__(self):
+ return Tracer._trace_numpy_operation(np.absolute, self)
+
+ def __round__(self, ndigits=None):
+ if ndigits is None:
+ result = Tracer._trace_numpy_operation(np.around, self)
+ if self._is_direct:
+ message = (
+ "'round(x)' cannot be used in direct definition (you may use np.around instead)"
+ )
+ raise RuntimeError(message)
+ return result.astype(np.int64)
+
+ return Tracer._trace_numpy_operation(np.around, self, decimals=ndigits)
+
+ def __invert__(self):
+ return Tracer._trace_numpy_operation(np.invert, self)
+
+ def __and__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.bitwise_and, self, self.sanitize(other))
+
+ def __rand__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.bitwise_and, self.sanitize(other), self)
+
+ def __or__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.bitwise_or, self, self.sanitize(other))
+
+ def __ror__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.bitwise_or, self.sanitize(other), self)
+
+ def __xor__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.bitwise_xor, self, self.sanitize(other))
+
+ def __rxor__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.bitwise_xor, self.sanitize(other), self)
+
+ def __lshift__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.left_shift, self, self.sanitize(other))
+
+ def __rlshift__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.left_shift, self.sanitize(other), self)
+
+ def __rshift__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.right_shift, self, self.sanitize(other))
+
+ def __rrshift__(self, other: Any) -> "Tracer":
+ return Tracer._trace_numpy_operation(np.right_shift, self.sanitize(other), self)
+
+ def __gt__(self, other: Any) -> "Tracer": # type: ignore
+ return Tracer._trace_numpy_operation(np.greater, self, self.sanitize(other))
+
+ def __ge__(self, other: Any) -> "Tracer": # type: ignore
+ return Tracer._trace_numpy_operation(np.greater_equal, self, self.sanitize(other))
+
+ def __lt__(self, other: Any) -> "Tracer": # type: ignore
+ return Tracer._trace_numpy_operation(np.less, self, self.sanitize(other))
+
+ def __le__(self, other: Any) -> "Tracer": # type: ignore
+ return Tracer._trace_numpy_operation(np.less_equal, self, self.sanitize(other))
+
+ def __eq__(self, other: Any) -> Union[bool, "Tracer"]: # type: ignore
+ return (
+ self is other
+ if not self._is_tracing
+ else Tracer._trace_numpy_operation(np.equal, self, self.sanitize(other))
+ )
+
+ def __ne__(self, other: Any) -> Union[bool, "Tracer"]: # type: ignore
+ return (
+ self is not other
+ if not self._is_tracing
+ else Tracer._trace_numpy_operation(np.not_equal, self, self.sanitize(other))
+ )
+
+ def astype(self, dtype: Union[DTypeLike, Type["ScalarAnnotation"]]) -> "Tracer":
+ """
+ Trace numpy.ndarray.astype(dtype).
+ """
+
+ if Tracer._is_direct:
+ output_value = deepcopy(self.output)
+
+ if isinstance(dtype, type) and issubclass(dtype, ScalarAnnotation):
+ output_value.dtype = dtype.dtype
+ else:
+ message = (
+ "`astype` method must be called with a concrete.numpy type "
+ "for direct circuit definition (e.g., value.astype(cnp.uint4))"
+ )
+ raise ValueError(message)
+
+ computation = Node.generic(
+ "astype",
+ [self.output],
+ output_value,
+ lambda x: x, # unused for direct definition
+ )
+ return Tracer(computation, [self])
+
+ if isinstance(dtype, type) and issubclass(dtype, ScalarAnnotation):
+ message = (
+ "`astype` method must be called with a "
+ "numpy type for compilation (e.g., value.astype(np.int64))"
+ )
+ raise ValueError(message)
+
+ dtype = np.dtype(dtype).type
+ if np.issubdtype(dtype, np.integer) and dtype != np.int64:
+ print(
+ "Warning: When using `value.astype(newtype)` "
+ "with an integer newtype, "
+ "only use `np.int64` as the newtype "
+ "to avoid unexpected overflows "
+ "during inputset evaluation"
+ )
+
+ output_value = deepcopy(self.output)
+ output_value.dtype = Value.of(dtype(0)).dtype # type: ignore
+
+ if np.issubdtype(dtype, np.integer):
+
+ def evaluator(x, dtype):
+ if np.any(np.isnan(x)):
+ message = "A `NaN` value is tried to be converted to integer"
+ raise ValueError(message)
+ if np.any(np.isinf(x)):
+ message = "An `Inf` value is tried to be converted to integer"
+ raise ValueError(message)
+ return x.astype(dtype)
+
+ else:
+
+ def evaluator(x, dtype):
+ return x.astype(dtype)
+
+ computation = Node.generic(
+ "astype",
+ [self.output],
+ output_value,
+ evaluator,
+ kwargs={"dtype": dtype},
+ )
+ return Tracer(computation, [self])
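For integer targets, `astype` installs an evaluator that rejects `NaN` and `Inf` before casting, because `ndarray.astype` would otherwise silently produce garbage values for them. The guard in isolation (assumed helper name):

```python
import numpy as np

def checked_astype(x, dtype):
    """Cast to an integer dtype, refusing NaN/Inf inputs (mirrors the
    evaluator installed by `Tracer.astype` for integer targets)."""
    if np.any(np.isnan(x)):
        raise ValueError("Cannot convert a NaN value to an integer")
    if np.any(np.isinf(x)):
        raise ValueError("Cannot convert an Inf value to an integer")
    return x.astype(dtype)

print(checked_astype(np.array([1.9, -0.5]), np.int64))  # [1 0], truncation toward zero
```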
+
+ def clip(self, minimum: Any, maximum: Any) -> "Tracer":
+ """
+ Trace numpy.ndarray.clip().
+ """
+
+ return Tracer._trace_numpy_operation(
+ np.clip, self, self.sanitize(minimum), self.sanitize(maximum)
+ )
+
+ def dot(self, other: Any) -> "Tracer":
+ """
+ Trace numpy.ndarray.dot().
+ """
+
+ return Tracer._trace_numpy_operation(np.dot, self, self.sanitize(other))
+
+ def flatten(self) -> "Tracer":
+ """
+ Trace numpy.ndarray.flatten().
+ """
+
+ return Tracer._trace_numpy_operation(np.reshape, self, newshape=(self.output.size,))
+
+ def reshape(self, newshape: Tuple[Any, ...]) -> "Tracer":
+ """
+ Trace numpy.ndarray.reshape(newshape).
+ """
+
+ return Tracer._trace_numpy_operation(np.reshape, self, newshape=newshape)
+
+ def round(self, decimals: int = 0) -> "Tracer":
+ """
+ Trace numpy.ndarray.round().
+ """
+
+ return Tracer._trace_numpy_operation(np.around, self, decimals=decimals)
+
+ def transpose(self, axes: Optional[Tuple[int, ...]] = None) -> "Tracer":
+ """
+ Trace numpy.ndarray.transpose().
+ """
+
+ if axes is None:
+ return Tracer._trace_numpy_operation(np.transpose, self)
+
+ return Tracer._trace_numpy_operation(np.transpose, self, axes=axes)
+
+ def __getitem__(
+ self,
+ index: Union[int, np.integer, slice, Tuple[Union[int, np.integer, slice], ...]],
+ ) -> "Tracer":
+ if not isinstance(index, tuple):
+ index = (index,)
+
+ for indexing_element in index:
+ valid = isinstance(indexing_element, (int, np.integer, slice))
+
+ if isinstance(indexing_element, slice):
+ if (
+ not (
+ indexing_element.start is None
+ or isinstance(indexing_element.start, (int, np.integer))
+ )
+ or not (
+ indexing_element.stop is None
+ or isinstance(indexing_element.stop, (int, np.integer))
+ )
+ or not (
+ indexing_element.step is None
+ or isinstance(indexing_element.step, (int, np.integer))
+ )
+ ):
+ valid = False
+
+ if not valid:
+ message = (
+ f"Indexing with '{format_indexing_element(indexing_element)}' is not supported"
+ )
+ raise ValueError(message)
+
+ output_value = deepcopy(self.output)
+ output_value.shape = np.zeros(output_value.shape)[index].shape
+
+ computation = Node.generic(
+ "index.static",
+ [self.output],
+ output_value,
+ lambda x, index: x[index],
+ kwargs={"index": index},
+ )
+ return Tracer(computation, [self])
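`__getitem__` infers the result shape without touching the traced value: it indexes a dummy `np.zeros` array of the same shape and reads `.shape` off the result, letting numpy do all the slicing arithmetic. The same trick in isolation (assumed helper name):

```python
import numpy as np

def infer_indexed_shape(shape, index):
    """Shape of `x[index]` for an array of the given shape, computed on a
    zero-filled dummy (same trick as `Tracer.__getitem__`)."""
    return np.zeros(shape)[index].shape

print(infer_indexed_shape((4, 5), (slice(None), 2)))  # (4,)
print(infer_indexed_shape((4, 5), (slice(1, 3),)))    # (2, 5)
```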
+
+ def __setitem__(
+ self,
+ index: Union[int, np.integer, slice, Tuple[Union[int, np.integer, slice], ...]],
+ value: Any,
+ ):
+ if not isinstance(index, tuple):
+ index = (index,)
+
+ for indexing_element in index:
+ valid = isinstance(indexing_element, (int, np.integer, slice))
+
+ if isinstance(indexing_element, slice):
+ if (
+ not (
+ indexing_element.start is None
+ or isinstance(indexing_element.start, (int, np.integer))
+ )
+ or not (
+ indexing_element.stop is None
+ or isinstance(indexing_element.stop, (int, np.integer))
+ )
+ or not (
+ indexing_element.step is None
+ or isinstance(indexing_element.step, (int, np.integer))
+ )
+ ):
+ valid = False
+
+ if not valid:
+ message = (
+ f"Assigning to '{format_indexing_element(indexing_element)}' is not supported"
+ )
+ raise ValueError(message)
+
+ np.zeros(self.output.shape)[index] = 1
+
+ def assign(x, value, index):
+ x[index] = value
+ return x
+
+ sanitized_value = self.sanitize(value)
+ computation = Node.generic(
+ "assign.static",
+ [self.output, sanitized_value.output],
+ self.output,
+ assign,
+ kwargs={"index": index},
+ )
+ new_version = Tracer(computation, [self, sanitized_value])
+
+ self.last_version = new_version
+
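In-place assignment cannot mutate a traced value, so `__setitem__` records a fresh "assign" node and stores the resulting tracer in `last_version`; later consumers (see `__init__` and the output rewriting in `trace`) substitute the latest version. A toy sketch of that versioning scheme (toy names, not the real API):

```python
class Versioned:
    """Toy copy-on-write value: writes create a new version instead of
    mutating, mirroring the `Tracer.last_version` bookkeeping."""

    def __init__(self, data):
        self.data = list(data)
        self.last_version = None

    def set(self, index, value):
        new = Versioned(self.data)
        new.data[index] = value
        self.last_version = new  # consumers should use `new` from now on

    def resolve(self):
        # follow the version chain to the newest value
        return self if self.last_version is None else self.last_version.resolve()

v = Versioned([0, 0, 0])
v.set(1, 7)
print(v.resolve().data)  # [0, 7, 0]
print(v.data)            # original untouched: [0, 0, 0]
```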
+ @property
+ def shape(self) -> Tuple[int, ...]:
+ """
+ Trace numpy.ndarray.shape.
+ """
+
+ return self.output.shape
+
+ @property
+ def ndim(self) -> int:
+ """
+ Trace numpy.ndarray.ndim.
+ """
+
+ return self.output.ndim
+
+ @property
+ def size(self) -> int:
+ """
+ Trace numpy.ndarray.size.
+ """
+
+ return self.output.size
+
+ @property
+ def T(self) -> "Tracer": # pylint: disable=invalid-name # noqa: N802
+ """
+ Trace numpy.ndarray.T.
+ """
+
+ return Tracer._trace_numpy_operation(np.transpose, self)
+
+
+class Annotation(Tracer):
+ """
+ Base annotation for direct definition.
+ """
+
+
+class ScalarAnnotation(Annotation):
+ """
+ Base scalar annotation for direct definition.
+ """
+
+ dtype: BaseDataType
+
+
+class TensorAnnotation(Annotation):
+ """
+ Base tensor annotation for direct definition.
+ """
diff --git a/frontends/concrete-python/concrete/numpy/tracing/typing.py b/frontends/concrete-python/concrete/numpy/tracing/typing.py
new file mode 100644
index 000000000..c2bcbaac6
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/tracing/typing.py
@@ -0,0 +1,1223 @@
+"""
+Declaration of type annotations.
+"""
+
+from typing import Any
+
+from ..dtypes import Float, SignedInteger, UnsignedInteger
+from ..values import Value
+from .tracer import ScalarAnnotation, TensorAnnotation
+
+# pylint: disable=function-redefined,invalid-name,no-self-use,too-many-lines,using-constant-test
+# ruff: noqa
+
+
+# We'll pull a little trick on mypy:
+# this branch is never executed at runtime,
+# but mypy will still use the information within it.
+# As a result, it treats our types as `Any` and stops complaining when they are used with numpy.
+
+if False:
+
+ f32 = Any
+ f64 = Any
+
+ int1 = Any
+ int2 = Any
+ int3 = Any
+ int4 = Any
+ int5 = Any
+ int6 = Any
+ int7 = Any
+ int8 = Any
+ int9 = Any
+ int10 = Any
+ int11 = Any
+ int12 = Any
+ int13 = Any
+ int14 = Any
+ int15 = Any
+ int16 = Any
+ int17 = Any
+ int18 = Any
+ int19 = Any
+ int20 = Any
+ int21 = Any
+ int22 = Any
+ int23 = Any
+ int24 = Any
+ int25 = Any
+ int26 = Any
+ int27 = Any
+ int28 = Any
+ int29 = Any
+ int30 = Any
+ int31 = Any
+ int32 = Any
+ int33 = Any
+ int34 = Any
+ int35 = Any
+ int36 = Any
+ int37 = Any
+ int38 = Any
+ int39 = Any
+ int40 = Any
+ int41 = Any
+ int42 = Any
+ int43 = Any
+ int44 = Any
+ int45 = Any
+ int46 = Any
+ int47 = Any
+ int48 = Any
+ int49 = Any
+ int50 = Any
+ int51 = Any
+ int52 = Any
+ int53 = Any
+ int54 = Any
+ int55 = Any
+ int56 = Any
+ int57 = Any
+ int58 = Any
+ int59 = Any
+ int60 = Any
+ int61 = Any
+ int62 = Any
+ int63 = Any
+ int64 = Any
+
+ uint1 = Any
+ uint2 = Any
+ uint3 = Any
+ uint4 = Any
+ uint5 = Any
+ uint6 = Any
+ uint7 = Any
+ uint8 = Any
+ uint9 = Any
+ uint10 = Any
+ uint11 = Any
+ uint12 = Any
+ uint13 = Any
+ uint14 = Any
+ uint15 = Any
+ uint16 = Any
+ uint17 = Any
+ uint18 = Any
+ uint19 = Any
+ uint20 = Any
+ uint21 = Any
+ uint22 = Any
+ uint23 = Any
+ uint24 = Any
+ uint25 = Any
+ uint26 = Any
+ uint27 = Any
+ uint28 = Any
+ uint29 = Any
+ uint30 = Any
+ uint31 = Any
+ uint32 = Any
+ uint33 = Any
+ uint34 = Any
+ uint35 = Any
+ uint36 = Any
+ uint37 = Any
+ uint38 = Any
+ uint39 = Any
+ uint40 = Any
+ uint41 = Any
+ uint42 = Any
+ uint43 = Any
+ uint44 = Any
+ uint45 = Any
+ uint46 = Any
+ uint47 = Any
+ uint48 = Any
+ uint49 = Any
+ uint50 = Any
+ uint51 = Any
+ uint52 = Any
+ uint53 = Any
+ uint54 = Any
+ uint55 = Any
+ uint56 = Any
+ uint57 = Any
+ uint58 = Any
+ uint59 = Any
+ uint60 = Any
+ uint61 = Any
+ uint62 = Any
+ uint63 = Any
+ uint64 = Any
+ tensor = Any
+
+
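The `if False:` block above never runs, so at runtime the class definitions below are the only bindings; a static type checker, however, records the `= Any` assignments and therefore accepts these names anywhere numpy types are expected. A minimal demonstration of the shadowing (toy name, not from this module):

```python
from typing import Any

# Never executed at runtime, but a static type checker still records
# `width = Any`, silencing complaints wherever `width` is used.
if False:
    width = Any

# The runtime definition shadows the (never-executed) binding above.
class width:
    bits = 8

print(width.bits)  # 8
```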
+class f32(ScalarAnnotation): # type: ignore
+ """
+ Scalar f32 annotation.
+ """
+
+ dtype = Float(32)
+
+
+class f64(ScalarAnnotation): # type: ignore
+ """
+ Scalar f64 annotation.
+ """
+
+ dtype = Float(64)
+
+
+class int1(ScalarAnnotation): # type: ignore
+ """
+ Scalar int1 annotation.
+ """
+
+ dtype = SignedInteger(1)
+
+
+class int2(ScalarAnnotation): # type: ignore
+ """
+ Scalar int2 annotation.
+ """
+
+ dtype = SignedInteger(2)
+
+
+class int3(ScalarAnnotation): # type: ignore
+ """
+ Scalar int3 annotation.
+ """
+
+ dtype = SignedInteger(3)
+
+
+class int4(ScalarAnnotation): # type: ignore
+ """
+ Scalar int4 annotation.
+ """
+
+ dtype = SignedInteger(4)
+
+
+class int5(ScalarAnnotation): # type: ignore
+ """
+ Scalar int5 annotation.
+ """
+
+ dtype = SignedInteger(5)
+
+
+class int6(ScalarAnnotation): # type: ignore
+ """
+ Scalar int6 annotation.
+ """
+
+ dtype = SignedInteger(6)
+
+
+class int7(ScalarAnnotation): # type: ignore
+ """
+ Scalar int7 annotation.
+ """
+
+ dtype = SignedInteger(7)
+
+
+class int8(ScalarAnnotation): # type: ignore
+ """
+ Scalar int8 annotation.
+ """
+
+ dtype = SignedInteger(8)
+
+
+class int9(ScalarAnnotation): # type: ignore
+ """
+ Scalar int9 annotation.
+ """
+
+ dtype = SignedInteger(9)
+
+
+class int10(ScalarAnnotation): # type: ignore
+ """
+ Scalar int10 annotation.
+ """
+
+ dtype = SignedInteger(10)
+
+
+class int11(ScalarAnnotation): # type: ignore
+ """
+ Scalar int11 annotation.
+ """
+
+ dtype = SignedInteger(11)
+
+
+class int12(ScalarAnnotation): # type: ignore
+ """
+ Scalar int12 annotation.
+ """
+
+ dtype = SignedInteger(12)
+
+
+class int13(ScalarAnnotation): # type: ignore
+ """
+ Scalar int13 annotation.
+ """
+
+ dtype = SignedInteger(13)
+
+
+class int14(ScalarAnnotation): # type: ignore
+ """
+ Scalar int14 annotation.
+ """
+
+ dtype = SignedInteger(14)
+
+
+class int15(ScalarAnnotation): # type: ignore
+ """
+ Scalar int15 annotation.
+ """
+
+ dtype = SignedInteger(15)
+
+
+class int16(ScalarAnnotation): # type: ignore
+ """
+ Scalar int16 annotation.
+ """
+
+ dtype = SignedInteger(16)
+
+
+class int17(ScalarAnnotation): # type: ignore
+ """
+ Scalar int17 annotation.
+ """
+
+ dtype = SignedInteger(17)
+
+
+class int18(ScalarAnnotation): # type: ignore
+ """
+ Scalar int18 annotation.
+ """
+
+ dtype = SignedInteger(18)
+
+
+class int19(ScalarAnnotation): # type: ignore
+ """
+ Scalar int19 annotation.
+ """
+
+ dtype = SignedInteger(19)
+
+
+class int20(ScalarAnnotation): # type: ignore
+ """
+ Scalar int20 annotation.
+ """
+
+ dtype = SignedInteger(20)
+
+
+class int21(ScalarAnnotation): # type: ignore
+ """
+ Scalar int21 annotation.
+ """
+
+ dtype = SignedInteger(21)
+
+
+class int22(ScalarAnnotation): # type: ignore
+ """
+ Scalar int22 annotation.
+ """
+
+ dtype = SignedInteger(22)
+
+
+class int23(ScalarAnnotation): # type: ignore
+ """
+ Scalar int23 annotation.
+ """
+
+ dtype = SignedInteger(23)
+
+
+class int24(ScalarAnnotation): # type: ignore
+ """
+ Scalar int24 annotation.
+ """
+
+ dtype = SignedInteger(24)
+
+
+class int25(ScalarAnnotation): # type: ignore
+ """
+ Scalar int25 annotation.
+ """
+
+ dtype = SignedInteger(25)
+
+
+class int26(ScalarAnnotation): # type: ignore
+ """
+ Scalar int26 annotation.
+ """
+
+ dtype = SignedInteger(26)
+
+
+class int27(ScalarAnnotation): # type: ignore
+ """
+ Scalar int27 annotation.
+ """
+
+ dtype = SignedInteger(27)
+
+
+class int28(ScalarAnnotation): # type: ignore
+ """
+ Scalar int28 annotation.
+ """
+
+ dtype = SignedInteger(28)
+
+
+class int29(ScalarAnnotation): # type: ignore
+ """
+ Scalar int29 annotation.
+ """
+
+ dtype = SignedInteger(29)
+
+
+class int30(ScalarAnnotation): # type: ignore
+ """
+ Scalar int30 annotation.
+ """
+
+ dtype = SignedInteger(30)
+
+
+class int31(ScalarAnnotation): # type: ignore
+ """
+ Scalar int31 annotation.
+ """
+
+ dtype = SignedInteger(31)
+
+
+class int32(ScalarAnnotation): # type: ignore
+ """
+ Scalar int32 annotation.
+ """
+
+ dtype = SignedInteger(32)
+
+
+class int33(ScalarAnnotation): # type: ignore
+ """
+ Scalar int33 annotation.
+ """
+
+ dtype = SignedInteger(33)
+
+
+class int34(ScalarAnnotation): # type: ignore
+ """
+ Scalar int34 annotation.
+ """
+
+ dtype = SignedInteger(34)
+
+
+class int35(ScalarAnnotation): # type: ignore
+ """
+ Scalar int35 annotation.
+ """
+
+ dtype = SignedInteger(35)
+
+
+class int36(ScalarAnnotation): # type: ignore
+ """
+ Scalar int36 annotation.
+ """
+
+ dtype = SignedInteger(36)
+
+
+class int37(ScalarAnnotation): # type: ignore
+ """
+ Scalar int37 annotation.
+ """
+
+ dtype = SignedInteger(37)
+
+
+class int38(ScalarAnnotation): # type: ignore
+ """
+ Scalar int38 annotation.
+ """
+
+ dtype = SignedInteger(38)
+
+
+class int39(ScalarAnnotation): # type: ignore
+ """
+ Scalar int39 annotation.
+ """
+
+ dtype = SignedInteger(39)
+
+
+class int40(ScalarAnnotation): # type: ignore
+ """
+ Scalar int40 annotation.
+ """
+
+ dtype = SignedInteger(40)
+
+
+class int41(ScalarAnnotation): # type: ignore
+ """
+ Scalar int41 annotation.
+ """
+
+ dtype = SignedInteger(41)
+
+
+class int42(ScalarAnnotation): # type: ignore
+ """
+ Scalar int42 annotation.
+ """
+
+ dtype = SignedInteger(42)
+
+
+class int43(ScalarAnnotation): # type: ignore
+ """
+ Scalar int43 annotation.
+ """
+
+ dtype = SignedInteger(43)
+
+
+class int44(ScalarAnnotation): # type: ignore
+ """
+ Scalar int44 annotation.
+ """
+
+ dtype = SignedInteger(44)
+
+
+class int45(ScalarAnnotation): # type: ignore
+ """
+ Scalar int45 annotation.
+ """
+
+ dtype = SignedInteger(45)
+
+
+class int46(ScalarAnnotation): # type: ignore
+ """
+ Scalar int46 annotation.
+ """
+
+ dtype = SignedInteger(46)
+
+
+class int47(ScalarAnnotation): # type: ignore
+ """
+ Scalar int47 annotation.
+ """
+
+ dtype = SignedInteger(47)
+
+
+class int48(ScalarAnnotation): # type: ignore
+ """
+ Scalar int48 annotation.
+ """
+
+ dtype = SignedInteger(48)
+
+
+class int49(ScalarAnnotation): # type: ignore
+ """
+ Scalar int49 annotation.
+ """
+
+ dtype = SignedInteger(49)
+
+
+class int50(ScalarAnnotation): # type: ignore
+ """
+ Scalar int50 annotation.
+ """
+
+ dtype = SignedInteger(50)
+
+
+class int51(ScalarAnnotation): # type: ignore
+ """
+ Scalar int51 annotation.
+ """
+
+ dtype = SignedInteger(51)
+
+
+class int52(ScalarAnnotation): # type: ignore
+ """
+ Scalar int52 annotation.
+ """
+
+ dtype = SignedInteger(52)
+
+
+class int53(ScalarAnnotation): # type: ignore
+ """
+ Scalar int53 annotation.
+ """
+
+ dtype = SignedInteger(53)
+
+
+class int54(ScalarAnnotation): # type: ignore
+ """
+ Scalar int54 annotation.
+ """
+
+ dtype = SignedInteger(54)
+
+
+class int55(ScalarAnnotation): # type: ignore
+ """
+ Scalar int55 annotation.
+ """
+
+ dtype = SignedInteger(55)
+
+
+class int56(ScalarAnnotation): # type: ignore
+ """
+ Scalar int56 annotation.
+ """
+
+ dtype = SignedInteger(56)
+
+
+class int57(ScalarAnnotation): # type: ignore
+ """
+ Scalar int57 annotation.
+ """
+
+ dtype = SignedInteger(57)
+
+
+class int58(ScalarAnnotation): # type: ignore
+ """
+ Scalar int58 annotation.
+ """
+
+ dtype = SignedInteger(58)
+
+
+class int59(ScalarAnnotation): # type: ignore
+ """
+ Scalar int59 annotation.
+ """
+
+ dtype = SignedInteger(59)
+
+
+class int60(ScalarAnnotation): # type: ignore
+ """
+ Scalar int60 annotation.
+ """
+
+ dtype = SignedInteger(60)
+
+
+class int61(ScalarAnnotation): # type: ignore
+ """
+ Scalar int61 annotation.
+ """
+
+ dtype = SignedInteger(61)
+
+
+class int62(ScalarAnnotation): # type: ignore
+ """
+ Scalar int62 annotation.
+ """
+
+ dtype = SignedInteger(62)
+
+
+class int63(ScalarAnnotation): # type: ignore
+ """
+ Scalar int63 annotation.
+ """
+
+ dtype = SignedInteger(63)
+
+
+class int64(ScalarAnnotation): # type: ignore
+ """
+ Scalar int64 annotation.
+ """
+
+ dtype = SignedInteger(64)
+
+
+class uint1(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint1 annotation.
+ """
+
+ dtype = UnsignedInteger(1)
+
+
+class uint2(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint2 annotation.
+ """
+
+ dtype = UnsignedInteger(2)
+
+
+class uint3(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint3 annotation.
+ """
+
+ dtype = UnsignedInteger(3)
+
+
+class uint4(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint4 annotation.
+ """
+
+ dtype = UnsignedInteger(4)
+
+
+class uint5(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint5 annotation.
+ """
+
+ dtype = UnsignedInteger(5)
+
+
+class uint6(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint6 annotation.
+ """
+
+ dtype = UnsignedInteger(6)
+
+
+class uint7(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint7 annotation.
+ """
+
+ dtype = UnsignedInteger(7)
+
+
+class uint8(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint8 annotation.
+ """
+
+ dtype = UnsignedInteger(8)
+
+
+class uint9(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint9 annotation.
+ """
+
+ dtype = UnsignedInteger(9)
+
+
+class uint10(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint10 annotation.
+ """
+
+ dtype = UnsignedInteger(10)
+
+
+class uint11(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint11 annotation.
+ """
+
+ dtype = UnsignedInteger(11)
+
+
+class uint12(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint12 annotation.
+ """
+
+ dtype = UnsignedInteger(12)
+
+
+class uint13(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint13 annotation.
+ """
+
+ dtype = UnsignedInteger(13)
+
+
+class uint14(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint14 annotation.
+ """
+
+ dtype = UnsignedInteger(14)
+
+
+class uint15(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint15 annotation.
+ """
+
+ dtype = UnsignedInteger(15)
+
+
+class uint16(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint16 annotation.
+ """
+
+ dtype = UnsignedInteger(16)
+
+
+class uint17(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint17 annotation.
+ """
+
+ dtype = UnsignedInteger(17)
+
+
+class uint18(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint18 annotation.
+ """
+
+ dtype = UnsignedInteger(18)
+
+
+class uint19(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint19 annotation.
+ """
+
+ dtype = UnsignedInteger(19)
+
+
+class uint20(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint20 annotation.
+ """
+
+ dtype = UnsignedInteger(20)
+
+
+class uint21(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint21 annotation.
+ """
+
+ dtype = UnsignedInteger(21)
+
+
+class uint22(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint22 annotation.
+ """
+
+ dtype = UnsignedInteger(22)
+
+
+class uint23(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint23 annotation.
+ """
+
+ dtype = UnsignedInteger(23)
+
+
+class uint24(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint24 annotation.
+ """
+
+ dtype = UnsignedInteger(24)
+
+
+class uint25(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint25 annotation.
+ """
+
+ dtype = UnsignedInteger(25)
+
+
+class uint26(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint26 annotation.
+ """
+
+ dtype = UnsignedInteger(26)
+
+
+class uint27(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint27 annotation.
+ """
+
+ dtype = UnsignedInteger(27)
+
+
+class uint28(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint28 annotation.
+ """
+
+ dtype = UnsignedInteger(28)
+
+
+class uint29(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint29 annotation.
+ """
+
+ dtype = UnsignedInteger(29)
+
+
+class uint30(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint30 annotation.
+ """
+
+ dtype = UnsignedInteger(30)
+
+
+class uint31(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint31 annotation.
+ """
+
+ dtype = UnsignedInteger(31)
+
+
+class uint32(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint32 annotation.
+ """
+
+ dtype = UnsignedInteger(32)
+
+
+class uint33(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint33 annotation.
+ """
+
+ dtype = UnsignedInteger(33)
+
+
+class uint34(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint34 annotation.
+ """
+
+ dtype = UnsignedInteger(34)
+
+
+class uint35(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint35 annotation.
+ """
+
+ dtype = UnsignedInteger(35)
+
+
+class uint36(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint36 annotation.
+ """
+
+ dtype = UnsignedInteger(36)
+
+
+class uint37(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint37 annotation.
+ """
+
+ dtype = UnsignedInteger(37)
+
+
+class uint38(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint38 annotation.
+ """
+
+ dtype = UnsignedInteger(38)
+
+
+class uint39(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint39 annotation.
+ """
+
+ dtype = UnsignedInteger(39)
+
+
+class uint40(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint40 annotation.
+ """
+
+ dtype = UnsignedInteger(40)
+
+
+class uint41(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint41 annotation.
+ """
+
+ dtype = UnsignedInteger(41)
+
+
+class uint42(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint42 annotation.
+ """
+
+ dtype = UnsignedInteger(42)
+
+
+class uint43(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint43 annotation.
+ """
+
+ dtype = UnsignedInteger(43)
+
+
+class uint44(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint44 annotation.
+ """
+
+ dtype = UnsignedInteger(44)
+
+
+class uint45(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint45 annotation.
+ """
+
+ dtype = UnsignedInteger(45)
+
+
+class uint46(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint46 annotation.
+ """
+
+ dtype = UnsignedInteger(46)
+
+
+class uint47(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint47 annotation.
+ """
+
+ dtype = UnsignedInteger(47)
+
+
+class uint48(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint48 annotation.
+ """
+
+ dtype = UnsignedInteger(48)
+
+
+class uint49(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint49 annotation.
+ """
+
+ dtype = UnsignedInteger(49)
+
+
+class uint50(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint50 annotation.
+ """
+
+ dtype = UnsignedInteger(50)
+
+
+class uint51(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint51 annotation.
+ """
+
+ dtype = UnsignedInteger(51)
+
+
+class uint52(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint52 annotation.
+ """
+
+ dtype = UnsignedInteger(52)
+
+
+class uint53(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint53 annotation.
+ """
+
+ dtype = UnsignedInteger(53)
+
+
+class uint54(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint54 annotation.
+ """
+
+ dtype = UnsignedInteger(54)
+
+
+class uint55(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint55 annotation.
+ """
+
+ dtype = UnsignedInteger(55)
+
+
+class uint56(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint56 annotation.
+ """
+
+ dtype = UnsignedInteger(56)
+
+
+class uint57(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint57 annotation.
+ """
+
+ dtype = UnsignedInteger(57)
+
+
+class uint58(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint58 annotation.
+ """
+
+ dtype = UnsignedInteger(58)
+
+
+class uint59(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint59 annotation.
+ """
+
+ dtype = UnsignedInteger(59)
+
+
+class uint60(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint60 annotation.
+ """
+
+ dtype = UnsignedInteger(60)
+
+
+class uint61(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint61 annotation.
+ """
+
+ dtype = UnsignedInteger(61)
+
+
+class uint62(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint62 annotation.
+ """
+
+ dtype = UnsignedInteger(62)
+
+
+class uint63(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint63 annotation.
+ """
+
+ dtype = UnsignedInteger(63)
+
+
+class uint64(ScalarAnnotation): # type: ignore
+ """
+ Scalar uint64 annotation.
+ """
+
+ dtype = UnsignedInteger(64)
+
+
+class tensor(TensorAnnotation): # type: ignore
+ """
+ Tensor annotation.
+ """
+
+ def __class_getitem__(cls, item):
+ if not isinstance(item, tuple):
+ item = (item,)
+
+ annotation = item[0]
+ if not issubclass(annotation, ScalarAnnotation):
+ raise ValueError(
+ f"First argument to tensor annotations should be a "
+ f"concrete-numpy data type (e.g., cnp.uint4) "
+ f"not {annotation.__name__ if hasattr(annotation, '__name__') else str(annotation)}"
+ )
+
+ if len(item) == 1:
+ raise ValueError(
+ "Tensor annotations should have a shape (e.g., cnp.tensor[cnp.uint4, 3, 2])"
+ )
+
+ shape = item[1:]
+ if not all(isinstance(x, int) for x in shape):
+ raise ValueError("Tensor annotation shape elements must be 'int'")
+
+ return Value(dtype=annotation.dtype, shape=shape, is_encrypted=False)
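The `tensor` annotation above relies on Python's `__class_getitem__` hook to turn a subscription like `cnp.tensor[cnp.uint4, 3, 2]` into a value description. A minimal, self-contained sketch of that pattern (the `Spec` class, `uint4` stand-in, and string dtypes are illustrative, not the library's actual classes):

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class Spec:
    # stand-in for the library's Value: a dtype paired with a shape
    dtype: str
    shape: Tuple[int, ...]


class uint4:  # stand-in for a scalar annotation class
    dtype = "uint4"


class tensor:
    def __class_getitem__(cls, item):
        # `tensor[a]` passes `a`, `tensor[a, b, c]` passes the tuple (a, b, c)
        if not isinstance(item, tuple):
            item = (item,)
        annotation, shape = item[0], item[1:]
        if not hasattr(annotation, "dtype"):
            raise ValueError("first argument should be a scalar annotation")
        if not shape:
            raise ValueError("tensor annotations should have a shape")
        if not all(isinstance(x, int) for x in shape):
            raise ValueError("shape elements must be 'int'")
        return Spec(dtype=annotation.dtype, shape=shape)


spec = tensor[uint4, 3, 2]
print(spec.shape)  # (3, 2)
```

The subscription returns a plain object rather than a type, which is exactly why the real implementation carries the `# type: ignore` comments: static type checkers expect `__class_getitem__` to produce a type.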
diff --git a/frontends/concrete-python/concrete/numpy/values/__init__.py b/frontends/concrete-python/concrete/numpy/values/__init__.py
new file mode 100644
index 000000000..c47bbbd84
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/values/__init__.py
@@ -0,0 +1,7 @@
+"""
+Define the available values and their semantics.
+"""
+
+from .scalar import ClearScalar, EncryptedScalar
+from .tensor import ClearTensor, EncryptedTensor
+from .value import Value
diff --git a/frontends/concrete-python/concrete/numpy/values/scalar.py b/frontends/concrete-python/concrete/numpy/values/scalar.py
new file mode 100644
index 000000000..249287f2c
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/values/scalar.py
@@ -0,0 +1,44 @@
+"""
+Declaration of `ClearScalar` and `EncryptedScalar` wrappers.
+"""
+
+from ..dtypes import BaseDataType
+from .value import Value
+
+
+def clear_scalar_builder(dtype: BaseDataType) -> Value:
+ """
+ Build a clear scalar value.
+
+ Args:
+ dtype (BaseDataType):
+ dtype of the value
+
+ Returns:
+ Value:
+ clear scalar value with given dtype
+ """
+
+ return Value(dtype=dtype, shape=(), is_encrypted=False)
+
+
+ClearScalar = clear_scalar_builder
+
+
+def encrypted_scalar_builder(dtype: BaseDataType) -> Value:
+ """
+ Build an encrypted scalar value.
+
+ Args:
+ dtype (BaseDataType):
+ dtype of the value
+
+ Returns:
+ Value:
+ encrypted scalar value with given dtype
+ """
+
+ return Value(dtype=dtype, shape=(), is_encrypted=True)
+
+
+EncryptedScalar = encrypted_scalar_builder
diff --git a/frontends/concrete-python/concrete/numpy/values/tensor.py b/frontends/concrete-python/concrete/numpy/values/tensor.py
new file mode 100644
index 000000000..abf9a6956
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/values/tensor.py
@@ -0,0 +1,52 @@
+"""
+Declaration of `ClearTensor` and `EncryptedTensor` wrappers.
+"""
+
+from typing import Tuple
+
+from ..dtypes import BaseDataType
+from .value import Value
+
+
+def clear_tensor_builder(dtype: BaseDataType, shape: Tuple[int, ...]) -> Value:
+ """
+ Build a clear tensor value.
+
+ Args:
+ dtype (BaseDataType):
+ dtype of the value
+
+ shape (Tuple[int, ...]):
+ shape of the value
+
+ Returns:
+ Value:
+ clear tensor value with given dtype and shape
+ """
+
+ return Value(dtype=dtype, shape=shape, is_encrypted=False)
+
+
+ClearTensor = clear_tensor_builder
+
+
+def encrypted_tensor_builder(dtype: BaseDataType, shape: Tuple[int, ...]) -> Value:
+ """
+ Build an encrypted tensor value.
+
+ Args:
+ dtype (BaseDataType):
+ dtype of the value
+
+ shape (Tuple[int, ...]):
+ shape of the value
+
+ Returns:
+ Value:
+ encrypted tensor value with given dtype and shape
+ """
+
+ return Value(dtype=dtype, shape=shape, is_encrypted=True)
+
+
+EncryptedTensor = encrypted_tensor_builder
diff --git a/frontends/concrete-python/concrete/numpy/values/value.py b/frontends/concrete-python/concrete/numpy/values/value.py
new file mode 100644
index 000000000..5edcaf3b5
--- /dev/null
+++ b/frontends/concrete-python/concrete/numpy/values/value.py
@@ -0,0 +1,162 @@
+"""
+Declaration of `Value` class.
+"""
+
+from typing import Any, Tuple
+
+import numpy as np
+
+from ..dtypes import BaseDataType, Float, Integer, UnsignedInteger
+
+
+class Value:
+ """
+ Value class, to combine data type, shape, and encryption status into a single object.
+ """
+
+ dtype: BaseDataType
+ shape: Tuple[int, ...]
+ is_encrypted: bool
+
+ @staticmethod
+ def of(value: Any, is_encrypted: bool = False) -> "Value": # pylint: disable=invalid-name
+ """
+ Get the `Value` that can represent `value`.
+
+ Args:
+ value (Any):
+ value that needs to be represented
+
+ is_encrypted (bool, default = False):
+ whether the resulting `Value` is encrypted or not
+
+ Returns:
+ Value:
+ `Value` that can represent `value`
+
+ Raises:
+ ValueError:
+ if `value` cannot be represented by `Value`
+ """
+
+ # pylint: disable=too-many-branches,too-many-return-statements
+
+ if isinstance(value, (bool, np.bool_)):
+ return Value(dtype=UnsignedInteger(1), shape=(), is_encrypted=is_encrypted)
+
+ if isinstance(value, (int, np.integer)):
+ return Value(
+ dtype=Integer.that_can_represent(value),
+ shape=(),
+ is_encrypted=is_encrypted,
+ )
+
+ if isinstance(value, (float, np.float64)):
+ return Value(dtype=Float(64), shape=(), is_encrypted=is_encrypted)
+
+ if isinstance(value, np.float32):
+ return Value(dtype=Float(32), shape=(), is_encrypted=is_encrypted)
+
+ if isinstance(value, np.float16):
+ return Value(dtype=Float(16), shape=(), is_encrypted=is_encrypted)
+
+ if isinstance(value, list):
+ try:
+ value = np.array(value)
+ except Exception: # pylint: disable=broad-except
+ # here we try our best to convert the list to np.ndarray;
+ # if it fails, the ValueError at the end of the function is raised
+ pass
+
+ if isinstance(value, np.ndarray):
+
+ if np.issubdtype(value.dtype, np.bool_):
+ return Value(dtype=UnsignedInteger(1), shape=value.shape, is_encrypted=is_encrypted)
+
+ if np.issubdtype(value.dtype, np.integer):
+ return Value(
+ dtype=Integer.that_can_represent(value),
+ shape=value.shape,
+ is_encrypted=is_encrypted,
+ )
+
+ if np.issubdtype(value.dtype, np.float64):
+ return Value(dtype=Float(64), shape=value.shape, is_encrypted=is_encrypted)
+
+ if np.issubdtype(value.dtype, np.float32):
+ return Value(dtype=Float(32), shape=value.shape, is_encrypted=is_encrypted)
+
+ if np.issubdtype(value.dtype, np.float16):
+ return Value(dtype=Float(16), shape=value.shape, is_encrypted=is_encrypted)
+
+ message = f"Value cannot represent {repr(value)}"
+ raise ValueError(message)
+
+ # pylint: enable=too-many-branches,too-many-return-statements
+
+ def __init__(self, dtype: BaseDataType, shape: Tuple[int, ...], is_encrypted: bool):
+ self.dtype = dtype
+ self.shape = shape
+ self.is_encrypted = is_encrypted
+
+ def __eq__(self, other: object) -> bool:
+ return (
+ isinstance(other, Value)
+ and self.dtype == other.dtype
+ and self.shape == other.shape
+ and self.is_encrypted == other.is_encrypted
+ )
+
+ def __str__(self) -> str:
+ encrypted_or_clear_str = "Encrypted" if self.is_encrypted else "Clear"
+ scalar_or_tensor_str = "Scalar" if self.is_scalar else "Tensor"
+ shape_str = f", shape={self.shape}" if not self.is_scalar else ""
+ return f"{encrypted_or_clear_str}{scalar_or_tensor_str}<{str(self.dtype)}{shape_str}>"
+
+ @property
+ def is_clear(self) -> bool:
+ """
+ Get whether the value is clear or not.
+
+ Returns:
+ bool:
+ True if value is not encrypted, False otherwise
+ """
+
+ return not self.is_encrypted
+
+ @property
+ def is_scalar(self) -> bool:
+ """
+ Get whether the value is scalar or not.
+
+ Returns:
+ bool:
+ True if shape of the value is (), False otherwise
+ """
+
+ return self.shape == ()
+
+ @property
+ def ndim(self) -> int:
+ """
+ Get number of dimensions of the value.
+
+ Returns:
+ int:
+ number of dimensions of the value
+ """
+
+ return len(self.shape)
+
+ @property
+ def size(self) -> int:
+ """
+ Get number of elements in the value.
+
+ Returns:
+ int:
+ number of elements in the value
+ """
+
+ return int(np.prod(self.shape))
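`Value.of` above delegates integer dtype inference to `Integer.that_can_represent`, which picks the smallest integer type that can hold a value. A hedged, self-contained sketch of that bit-width arithmetic (the function name and string results are stand-ins, not the library's API):

```python
def smallest_integer(value: int) -> str:
    """Return the narrowest integer type name that can represent `value`.

    Non-negative values get an unsigned type; negative values get a signed
    type whose minimum -2**(bits - 1) reaches down to the value.
    """
    if value >= 0:
        # unsigned: bit_length() bits suffice; at least 1 bit for zero
        bits = max(1, value.bit_length())
        return f"uint{bits}"
    bits = 1
    while -(2 ** (bits - 1)) > value:
        bits += 1
    return f"int{bits}"


print(smallest_integer(7))   # uint3
print(smallest_integer(-5))  # int4
```

This matches the behavior suggested by `Value.of`: a Python `bool` maps to a 1-bit unsigned integer, and wider values widen the inferred type one bit at a time.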
diff --git a/frontends/concrete-python/concrete/onnx/__init__.py b/frontends/concrete-python/concrete/onnx/__init__.py
new file mode 100644
index 000000000..f0bd1ce27
--- /dev/null
+++ b/frontends/concrete-python/concrete/onnx/__init__.py
@@ -0,0 +1,6 @@
+"""
+Implement machine learning operations as specified by ONNX.
+"""
+
+from .convolution import conv
+from .maxpool import maxpool
diff --git a/frontends/concrete-python/concrete/onnx/convolution.py b/frontends/concrete-python/concrete/onnx/convolution.py
new file mode 100644
index 000000000..e45a1aef4
--- /dev/null
+++ b/frontends/concrete-python/concrete/onnx/convolution.py
@@ -0,0 +1,683 @@
+"""
+Convolution operations' tracing and evaluation.
+"""
+
+import math
+from typing import Callable, List, Optional, Tuple, Union, cast
+
+import numpy as np
+import torch
+
+from ..numpy.internal.utils import assert_that
+from ..numpy.representation import Node
+from ..numpy.tracing import Tracer
+from ..numpy.values import EncryptedTensor
+
+SUPPORTED_AUTO_PAD = {
+ "NOTSET",
+}
+
+
+# pylint: disable=too-many-branches,too-many-statements
+
+
+def conv(
+ x: Union[np.ndarray, Tracer],
+ weight: Union[np.ndarray, Tracer],
+ bias: Optional[Union[np.ndarray, Tracer]] = None,
+ pads: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ strides: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ dilations: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ kernel_shape: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ group: int = 1,
+ auto_pad: str = "NOTSET",
+) -> Union[np.ndarray, Tracer]:
+ """
+ Trace and evaluate convolution operations.
+
+ Refer to https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv for more info.
+
+ Args:
+ x (Union[np.ndarray, Tracer]): input of shape (N, C, D1, ..., DN)
+ weight (Union[np.ndarray, Tracer]): kernel of shape (F, C / group, K1, ..., KN)
+ bias (Optional[Union[np.ndarray, Tracer]], optional): bias of shape (F,). Defaults to None.
+ pads (Optional[Union[Tuple[int, ...], List[int]]], optional):
+ padding for the beginning and ending along each spatial axis
+ (D1_begin, D2_begin, ..., D1_end, D2_end, ...).
+ Will be set to 0 along each spatial axis if not set.
+ strides (Optional[Union[Tuple[int, ...], List[int]]], optional):
+ stride along each spatial axis. Will be set to 1 along each spatial axis if not set.
+ dilations (Optional[Union[Tuple[int, ...], List[int]]], optional):
+ dilation along each spatial axis. Will be set to 1 along each spatial axis if not set.
+ kernel_shape (Optional[Union[Tuple[int, ...], List[int]]], optional):
+ shape of the convolution kernel. Inferred from the input weight if not set.
+ group (int, optional):
+ number of groups that input and output channels are divided into. Defaults to 1.
+ auto_pad (str, optional): padding strategy. Defaults to "NOTSET".
+
+ Raises:
+ ValueError: if arguments are not appropriate
+ TypeError: if arguments have unexpected types
+ NotImplementedError: if the requested convolution is not supported
+
+ Returns:
+ Union[np.ndarray, Tracer]: evaluation result or traced computation
+ """
+ if kernel_shape is not None and (
+ (weight.ndim - 2) != len(kernel_shape) or tuple(weight.shape[2:]) != tuple(kernel_shape)
+ ):
+ message = f"expected kernel_shape to be {weight.shape[2:]}, but got {kernel_shape}"
+ raise ValueError(message)
+
+ if isinstance(x, np.ndarray):
+ if not isinstance(weight, np.ndarray):
+ message = "expected weight to be of same type as x"
+ raise TypeError(message)
+ if bias is not None and not isinstance(bias, np.ndarray):
+ message = "expected bias to be of same type as x"
+ raise TypeError(message)
+ elif isinstance(x, Tracer):
+ if not isinstance(weight, (Tracer, np.ndarray)):
+ message = "expected weight to be of type Tracer or ndarray"
+ raise TypeError(message)
+ if bias is not None and not isinstance(bias, (Tracer, np.ndarray)):
+ message = "expected bias to be of type Tracer or ndarray"
+ raise TypeError(message)
+
+ if x.ndim <= 2:
+ message = (
+ f"expected input x to have at least 3 dimensions (N, C, D1, ...), but got {x.ndim}"
+ )
+ raise ValueError(message)
+
+ if weight.ndim <= 2:
+ message = (
+ f"expected weight to have at least 3 dimensions (F, C / group, K1, ...), but got "
+ f"{weight.ndim}"
+ )
+ raise ValueError(message)
+
+ if bias is not None and bias.ndim != 1:
+ message = f"expected bias to have a single dimension (F,), but got {bias.ndim}"
+ raise ValueError(message)
+
+ if not isinstance(group, int) or group <= 0:
+ message = f"expected group to be an integer > 0, but got {group}"
+ raise ValueError(message)
+
+ if auto_pad not in SUPPORTED_AUTO_PAD:
+ message = f"auto_pad should be in {SUPPORTED_AUTO_PAD}, but got {repr(auto_pad)}"
+ raise ValueError(message)
+
+ n_channels = x.shape[1]
+ if weight.shape[1] != n_channels / group:
+ message = (
+ f"expected number of channel in weight to be {n_channels / group} (C / group), but got "
+ f"{weight.shape[1]}"
+ )
+ raise ValueError(message)
+
+ if weight.shape[0] % group != 0:
+ message = (
+ f"expected number of feature maps ({weight.shape[0]}) to be a multiple of group "
+ f"({group})"
+ )
+ raise ValueError(message)
+
+ dims = x.ndim - 2
+ if dims == 1:
+ pads = (0, 0) if pads is None else pads
+ strides = (1,) if strides is None else strides
+ dilations = (1,) if dilations is None else dilations
+ return _conv1d(
+ x,
+ weight,
+ bias=bias,
+ pads=pads,
+ strides=strides,
+ dilations=dilations,
+ group=group,
+ auto_pad=auto_pad,
+ )
+ if dims == 2:
+ pads = (0, 0, 0, 0) if pads is None else pads
+ strides = (1, 1) if strides is None else strides
+ dilations = (1, 1) if dilations is None else dilations
+ return _conv2d(
+ x,
+ weight,
+ bias=bias,
+ pads=pads,
+ strides=strides,
+ dilations=dilations,
+ group=group,
+ auto_pad=auto_pad,
+ )
+ if dims == 3:
+ pads = (0, 0, 0, 0, 0, 0) if pads is None else pads
+ strides = (1, 1, 1) if strides is None else strides
+ dilations = (1, 1, 1) if dilations is None else dilations
+ return _conv3d(
+ x,
+ weight,
+ bias=bias,
+ pads=pads,
+ strides=strides,
+ dilations=dilations,
+ group=group,
+ auto_pad=auto_pad,
+ )
+
+ message = "only 1D, 2D, and 3D convolutions are supported"
+ raise NotImplementedError(message)
+
+
+# pylint: enable=too-many-branches,too-many-statements
+
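The dispatch in `conv` above normalizes the optional parameters before delegating: for an N-dimensional convolution, `pads` defaults to 2*N zeros (a begin and an end pad per spatial axis), while `strides` and `dilations` default to N ones. A tiny illustrative helper (not the library's API) capturing that rule:

```python
from typing import Tuple


def default_conv_params(spatial_dims: int) -> Tuple[Tuple[int, ...], Tuple[int, ...], Tuple[int, ...]]:
    # pads: (D1_begin, ..., DN_begin, D1_end, ..., DN_end) -> all zeros
    # strides / dilations: one entry per spatial axis -> all ones
    pads = (0,) * (2 * spatial_dims)
    strides = (1,) * spatial_dims
    dilations = (1,) * spatial_dims
    return pads, strides, dilations


print(default_conv_params(2))  # ((0, 0, 0, 0), (1, 1), (1, 1))
```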
+
+def _conv1d(
+ x: Union[np.ndarray, Tracer],
+ weight: Union[np.ndarray, Tracer],
+ bias: Optional[Union[np.ndarray, Tracer]],
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+ auto_pad: str, # pylint: disable=unused-argument
+) -> Union[np.ndarray, Tracer]:
+ """
+ Trace or evaluate 1D convolution.
+
+ Args:
+ x (Union[np.ndarray, Tracer]): input of shape (N, C, D)
+ weight (Union[np.ndarray, Tracer]): kernel of shape (F, C, D)
+ bias (Optional[Union[np.ndarray, Tracer]]): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding over dimension D (D_beg, D_end)
+ strides (Union[Tuple[int, ...], List[int]]): stride over dimension D
+ dilations (Union[Tuple[int, ...], List[int]]): dilation over dimension D
+ group (int):
+ number of groups that input and output channels are divided into.
+ auto_pad (str): padding strategy.
+
+ Raises:
+ ValueError: if arguments are not appropriate
+
+ Returns:
+ Union[np.ndarray, Tracer]: evaluation result or traced computation
+ """
+
+ assert_that(
+ x.ndim == 3,
+ f"expected input x to be of shape (N, C, D) when performing 1D convolution, but "
+ f"got {x.shape}",
+ )
+
+ assert_that(
+ weight.ndim == 3,
+ f"expected weight to be of shape (F, C, D) when performing 1D convolution, but "
+ f"got {weight.shape}",
+ )
+
+ if len(pads) != 2:
+ message = (
+ f"pads should be of form "
+ f"(D_begin_pad, D_end_pad) when performing "
+ f"1D convolution, but it's {pads}"
+ )
+ raise ValueError(message)
+ if len(strides) != 1:
+ message = (
+ f"strides should be of form (D_stride,) when performing 1D "
+ f"convolution, but it's {strides}"
+ )
+ raise ValueError(message)
+ if len(dilations) != 1:
+ message = (
+ f"dilations should be of form (D_dilation,) when performing 1D "
+ f"convolution, but it's {dilations}"
+ )
+ raise ValueError(message)
+
+ return _trace_or_eval(x, weight, bias, pads, strides, dilations, group)
+
+
+def _conv2d(
+ x: Union[np.ndarray, Tracer],
+ weight: Union[np.ndarray, Tracer],
+ bias: Optional[Union[np.ndarray, Tracer]],
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+ auto_pad: str, # pylint: disable=unused-argument
+) -> Union[np.ndarray, Tracer]:
+ """
+ Trace or evaluate 2D convolution.
+
+ Args:
+ x (Union[np.ndarray, Tracer]): input of shape (N, C, H, W)
+ weight (Union[np.ndarray, Tracer]): kernel of shape (F, C, H, W)
+ bias (Optional[Union[np.ndarray, Tracer]]): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding over each height and width (H_beg, W_beg, H_end, W_end)
+ strides (Union[Tuple[int, ...], List[int]]): stride over height and width
+ dilations (Union[Tuple[int, ...], List[int]]): dilation over height and width
+ group (int):
+ number of groups that input and output channels are divided into.
+ auto_pad (str): padding strategy.
+
+ Raises:
+ ValueError: if arguments are not appropriate
+
+ Returns:
+ Union[np.ndarray, Tracer]: evaluation result or traced computation
+ """
+
+ assert_that(
+ x.ndim == 4,
+ f"expected input x to be of shape (N, C, H, W) when performing 2D convolution, but "
+ f"got {x.shape}",
+ )
+
+ assert_that(
+ weight.ndim == 4,
+ f"expected weight to be of shape (F, C, H, W) when performing 2D convolution, but "
+ f"got {weight.shape}",
+ )
+
+ if len(pads) != 4:
+ message = (
+ f"pads should be of form "
+ f"(height_begin_pad, width_begin_pad, height_end_pad, width_end_pad) when performing "
+ f"2D convolution, but it's {pads}"
+ )
+ raise ValueError(message)
+ if len(strides) != 2:
+ message = (
+ f"strides should be of form (height_stride, width_stride) when performing 2D "
+ f"convolution, but it's {strides}"
+ )
+ raise ValueError(message)
+ if len(dilations) != 2:
+ message = (
+ f"dilations should be of form (height_dilation, width_dilation) when performing 2D "
+ f"convolution, but it's {dilations}"
+ )
+ raise ValueError(message)
+
+ return _trace_or_eval(x, weight, bias, pads, strides, dilations, group)
+
+
+def _conv3d(
+ x: Union[np.ndarray, Tracer],
+ weight: Union[np.ndarray, Tracer],
+ bias: Optional[Union[np.ndarray, Tracer]],
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+ auto_pad: str, # pylint: disable=unused-argument
+) -> Union[np.ndarray, Tracer]:
+ """
+ Trace or evaluate 3D convolution.
+
+ Args:
+ x (Union[np.ndarray, Tracer]): input of shape (N, C, D, H, W)
+ weight (Union[np.ndarray, Tracer]): kernel of shape (F, C, D, H, W)
+ bias (Optional[Union[np.ndarray, Tracer]]): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding over each spatial axis (D_beg, H_beg, W_beg, D_end, H_end, W_end)
+ strides (Union[Tuple[int, ...], List[int]]): stride over each spatial axis
+ dilations (Union[Tuple[int, ...], List[int]]): dilation over each spatial axis
+ group (int):
+ number of groups that input and output channels are divided into.
+ auto_pad (str): padding strategy.
+
+ Raises:
+ ValueError: if arguments are not appropriate
+
+ Returns:
+ Union[np.ndarray, Tracer]: evaluation result or traced computation
+ """
+
+ assert_that(
+ x.ndim == 5,
+ f"expected input x to be of shape (N, C, D, H, W) when performing 3D convolution, but "
+ f"got {x.shape}",
+ )
+
+ assert_that(
+ weight.ndim == 5,
+ f"expected weight to be of shape (F, C, D, H, W) when performing 3D convolution, but "
+ f"got {weight.shape}",
+ )
+
+ if len(pads) != 6:
+ message = (
+ f"pads should be of form "
+ f"(D_begin_pad, height_begin_pad, width_begin_pad, "
+ f"D_end_pad, height_end_pad, width_end_pad) when performing "
+ f"3D convolution, but it's {pads}"
+ )
+ raise ValueError(message)
+ if len(strides) != 3:
+ message = (
+ f"strides should be of form (D_stride, height_stride, width_stride) when performing "
+ f"3D convolution, but it's {strides}"
+ )
+ raise ValueError(message)
+ if len(dilations) != 3:
+ message = (
+ f"dilations should be of form (D_dilation, height_dilation, width_dilation) when "
+ f"performing 3D convolution, but it's {dilations}"
+ )
+ raise ValueError(message)
+
+ return _trace_or_eval(x, weight, bias, pads, strides, dilations, group)
+
+
+def _trace_or_eval(
+ x: Union[np.ndarray, Tracer],
+ weight: Union[np.ndarray, Tracer],
+ bias: Optional[Union[np.ndarray, Tracer]],
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+) -> Union[np.ndarray, Tracer]:
+ """
+ Trace or evaluate convolution.
+
+ Args:
+ x (Union[np.ndarray, Tracer]): input of shape (N, C, D1, ..., DN)
+ weight (Union[np.ndarray, Tracer]): kernel of shape (F, C / group, K1, ..., KN)
+ bias (Optional[Union[np.ndarray, Tracer]]): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding for the beginning and ending along each spatial axis
+ (D1_begin, D2_begin, ..., D1_end, D2_end, ...).
+ strides (Union[Tuple[int, ...], List[int]]): stride along each spatial axis.
+ dilations (Union[Tuple[int, ...], List[int]]): dilation along each spatial axis.
+ group (int):
+ number of groups that input and output channels are divided into.
+
+ Returns:
+ Union[np.ndarray, Tracer]: evaluation result or traced computation
+ """
+ assert_that(x.ndim in [3, 4, 5], "only 1D, 2D, and 3D convolutions are supported")
+ if x.ndim == 3:
+ conv_func = "conv1d"
+ elif x.ndim == 4:
+ conv_func = "conv2d"
+ else: # x.ndim == 5
+ conv_func = "conv3d"
+
+ if isinstance(x, Tracer):
+ return _trace_conv(x, weight, bias, pads, strides, dilations, group, conv_func)
+
+ assert isinstance(x, np.ndarray)
+ assert isinstance(weight, np.ndarray)
+
+ dtype = (
+ np.float64
+ if np.issubdtype(x.dtype, np.floating) or np.issubdtype(weight.dtype, np.floating)
+ else np.int64
+ )
+ bias = np.zeros(weight.shape[0], dtype=dtype) if bias is None else bias
+
+ assert isinstance(bias, np.ndarray)
+
+ return _evaluate_conv(x, weight, bias, pads, strides, dilations, group, conv_func)
+
+
+def _trace_conv(
+ x: Tracer,
+ weight: Union[np.ndarray, Tracer],
+ bias: Optional[Union[np.ndarray, Tracer]],
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+ conv_func: str,
+) -> Tracer:
+ """
+ Trace convolution.
+
+ Args:
+ x (Tracer): input of shape (N, C, D1, ..., DN)
+ weight (Union[np.ndarray, Tracer]): kernel of shape (F, C / group, K1, ..., KN)
+ bias (Optional[Union[np.ndarray, Tracer]]): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding for the beginning and ending along each spatial axis
+ (D1_begin, D2_begin, ..., D1_end, D2_end, ...).
+ strides (Union[Tuple[int, ...], List[int]]): stride along each spatial axis.
+ dilations (Union[Tuple[int, ...], List[int]]): dilation along each spatial axis.
+ group (int):
+ number of groups that input and output channels are divided into.
+ conv_func (str): convolution to apply, should be one of {conv1d,conv2d,conv3d}
+
+ Returns:
+ Tracer:
+ traced computation
+ """
+
+ conv_eval_funcs = {
+ "conv1d": _evaluate_conv1d,
+ "conv2d": _evaluate_conv2d,
+ "conv3d": _evaluate_conv3d,
+ }
+ eval_func = conv_eval_funcs.get(conv_func, None)
+ assert_that(
+ eval_func is not None,
+ f"expected conv_func to be one of {list(conv_eval_funcs.keys())}, but got {conv_func}",
+ )
+ eval_func = cast(Callable, eval_func)
+
+ weight = weight if isinstance(weight, Tracer) else Tracer(Node.constant(weight), [])
+
+ input_values = [x.output, weight.output]
+ inputs = [x, weight]
+
+ if bias is not None:
+ bias = bias if isinstance(bias, Tracer) else Tracer(Node.constant(bias), [])
+ input_values.append(bias.output)
+ inputs.append(bias)
+
+ batch_size = x.output.shape[0]
+ n_filters = weight.output.shape[0]
+
+ n_dim = x.ndim - 2 # remove batch_size and channel dims
+ total_pads_per_dim = []
+ for dim in range(n_dim):
+ total_pads_per_dim.append(pads[dim] + pads[n_dim + dim])
+
+ output_shape = [batch_size, n_filters]
+ for dim in range(n_dim):
+ input_dim_at_dim = x.output.shape[dim + 2]
+ weight_dim_at_dim = weight.output.shape[dim + 2]
+ output_shape.append(
+ math.floor(
+ (
+ input_dim_at_dim
+ + total_pads_per_dim[dim]
+ - dilations[dim] * (weight_dim_at_dim - 1)
+ - 1
+ )
+ / strides[dim]
+ )
+ + 1
+ )
+ output_value = EncryptedTensor(dtype=x.output.dtype, shape=tuple(output_shape))
+
+ computation = Node.generic(
+ conv_func, # "conv1d" or "conv2d" or "conv3d"
+ input_values,
+ output_value,
+ eval_func,
+ args=() if bias is not None else (np.zeros(n_filters, dtype=np.int64),),
+ kwargs={"pads": pads, "strides": strides, "dilations": dilations, "group": group},
+ )
+ return Tracer(computation, inputs)
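The output shape computed above follows the standard convolution size formula. As a small illustration (a hypothetical helper, not part of the library), the per-axis computation can be sketched as:

```python
import math


def conv_output_size(input_size, kernel_size, pad_begin, pad_end, stride, dilation):
    """Spatial output size of a convolution along one axis.

    Matches the formula used above: the dilated kernel spans
    dilation * (kernel_size - 1) + 1 input positions.
    """
    effective_kernel = dilation * (kernel_size - 1) + 1
    return math.floor((input_size + pad_begin + pad_end - effective_kernel) / stride) + 1


# a 28-wide input, 3-wide kernel, padding 1 on each side, stride 1, dilation 1
print(conv_output_size(28, 3, 1, 1, 1, 1))  # -> 28
```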
+
+
+def _evaluate_conv1d(
+ x: np.ndarray,
+ weight: np.ndarray,
+ bias: np.ndarray,
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+) -> np.ndarray:
+ """
+ Evaluate 1D convolution.
+
+ Args:
+ x (np.ndarray): input of shape (N, C, D)
+ weight (np.ndarray): kernel of shape (F, C / group, D)
+ bias (np.ndarray): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding over each axis (D_beg, D_end)
+ strides (Union[Tuple[int, ...], List[int]]): stride over dimension D
+ dilations (Union[Tuple[int, ...], List[int]]): dilation over dimension D
+ group (int, optional):
+ number of groups input channels and output channels are divided into.
+
+ Returns:
+ np.ndarray: result of the convolution
+ """
+ return _evaluate_conv(x, weight, bias, pads, strides, dilations, group, "conv1d")
+
+
+def _evaluate_conv2d(
+ x: np.ndarray,
+ weight: np.ndarray,
+ bias: np.ndarray,
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+) -> np.ndarray:
+ """
+ Evaluate 2D convolution.
+
+ Args:
+ x (np.ndarray): input of shape (N, C, H, W)
+ weight (np.ndarray): kernel of shape (F, C / group, H, W)
+ bias (np.ndarray): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding over each axis (H_beg, W_beg, H_end, W_end)
+ strides (Union[Tuple[int, ...], List[int]]): stride over height and width
+ dilations (Union[Tuple[int, ...], List[int]]): dilation over height and width
+ group (int, optional):
+ number of groups input channels and output channels are divided into.
+
+ Returns:
+ np.ndarray: result of the convolution
+ """
+ return _evaluate_conv(x, weight, bias, pads, strides, dilations, group, "conv2d")
+
+
+def _evaluate_conv3d(
+ x: np.ndarray,
+ weight: np.ndarray,
+ bias: np.ndarray,
+ pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+) -> np.ndarray:
+ """
+ Evaluate 3D convolution.
+
+ Args:
+ x (np.ndarray): input of shape (N, C, D, H, W)
+ weight (np.ndarray): kernel of shape (F, C / group, D, H, W)
+ bias (np.ndarray): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding over each axis (D_beg, H_beg, W_beg, D_end, H_end, W_end)
+ strides (Union[Tuple[int, ...], List[int]]): stride over D, height, and width
+ dilations (Union[Tuple[int, ...], List[int]]): dilation over D, height, and width
+ group (int, optional):
+ number of groups input channels and output channels are divided into.
+
+ Returns:
+ np.ndarray: result of the convolution
+ """
+ return _evaluate_conv(x, weight, bias, pads, strides, dilations, group, "conv3d")
+
+
+def _evaluate_conv(
+ x: np.ndarray,
+ weight: np.ndarray,
+ bias: np.ndarray,
+    pads: Union[Tuple[int, ...], List[int]],
+ strides: Union[Tuple[int, ...], List[int]],
+ dilations: Union[Tuple[int, ...], List[int]],
+ group: int,
+ conv_func: str,
+) -> np.ndarray:
+ """
+    Evaluate convolution.
+
+ Args:
+ x (np.ndarray): input of shape (N, C, D1, ..., DN)
+ weight (np.ndarray): kernel of shape (F, C / group, K1, ..., KN)
+ bias (np.ndarray): bias of shape (F,)
+ pads (Union[Tuple[int, ...], List[int]]):
+ padding for the beginning and ending along each spatial axis
+ (D1_begin, D2_begin, ..., D1_end, D2_end, ...).
+ strides (Union[Tuple[int, ...], List[int]]): stride along each spatial axis.
+ dilations (Union[Tuple[int, ...], List[int]]): dilation along each spatial axis.
+ group (int, optional):
+ number of groups input channels and output channels are divided into.
+ conv_func (str): convolution to apply, should be one of {conv1d,conv2d,conv3d}
+
+ Returns:
+ np.ndarray: result of the convolution
+ """
+
+ # pylint: disable=no-member
+ conv_funcs = {
+ "conv1d": torch.conv1d,
+ "conv2d": torch.conv2d,
+ "conv3d": torch.conv3d,
+ }
+
+ torch_conv_func = conv_funcs.get(conv_func, None)
+ assert_that(
+ torch_conv_func is not None,
+ f"expected conv_func to be one of {list(conv_funcs.keys())}, but got {conv_func}",
+ )
+ torch_conv_func = cast(Callable, torch_conv_func)
+
+ n_dim = x.ndim - 2 # remove batch_size and channel dims
+ torch_padding = []
+ for dim in range(n_dim):
+ if pads[dim] != pads[n_dim + dim]:
+ message = (
+ f"padding should be the same for the beginning of the dimension and its end, but "
+ f"got {pads[dim]} in the beginning, and {pads[n_dim + dim]} at the end for "
+ f"dimension {dim}"
+ )
+ raise ValueError(message)
+ torch_padding.append(pads[dim])
+
+ dtype = (
+ torch.float64
+ if np.issubdtype(x.dtype, np.floating)
+ or np.issubdtype(weight.dtype, np.floating)
+ or np.issubdtype(bias.dtype, np.floating)
+ else torch.long
+ )
+ return torch_conv_func(
+ torch.tensor(x, dtype=dtype),
+ torch.tensor(weight, dtype=dtype),
+ torch.tensor(bias, dtype=dtype),
+ stride=strides,
+ padding=torch_padding,
+ dilation=dilations,
+ groups=group,
+ ).numpy()
+
+ # pylint: enable=no-member
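For intuition about what the torch-backed evaluation above computes, here is a naive pure-Python 1D cross-correlation (the operation `torch.conv1d` performs) over plain lists. It is an illustrative sketch only, assuming a single batch element, `group=1`, and symmetric padding:

```python
def naive_conv1d(x, weight, bias, pad, stride=1, dilation=1):
    """Naive 1D cross-correlation over plain lists (single batch, group=1).

    x: (C, D) nested lists, weight: (F, C, K) nested lists, bias: (F,) list.
    `pad` zeros are added on both ends of each input channel.
    """
    channels, width = len(x), len(x[0])
    padded = [[0] * pad + list(row) + [0] * pad for row in x]
    out = []
    for f, kernel in enumerate(weight):
        k = len(kernel[0])
        span = dilation * (k - 1) + 1  # effective kernel extent
        row, pos = [], 0
        while pos + span <= width + 2 * pad:
            acc = bias[f]
            for c in range(channels):
                for i in range(k):
                    acc += kernel[c][i] * padded[c][pos + i * dilation]
            row.append(acc)
            pos += stride
        out.append(row)
    return out


print(naive_conv1d([[1, 2, 3, 4]], [[[1, 1]]], [0], 0))  # -> [[3, 5, 7]]
```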
diff --git a/frontends/concrete-python/concrete/onnx/maxpool.py b/frontends/concrete-python/concrete/onnx/maxpool.py
new file mode 100644
index 000000000..eeb628ba2
--- /dev/null
+++ b/frontends/concrete-python/concrete/onnx/maxpool.py
@@ -0,0 +1,336 @@
+"""
+Tracing and evaluation of maxpool function.
+"""
+
+from typing import List, Optional, Tuple, Union
+
+import numpy as np
+import torch
+
+from ..numpy.internal.utils import assert_that
+from ..numpy.representation import Node
+from ..numpy.tracing import Tracer
+from ..numpy.values import Value
+
+# pylint: disable=too-many-branches,too-many-statements
+
+
+AVAILABLE_AUTO_PAD = {
+ "NOTSET",
+ "SAME_UPPER",
+ "SAME_LOWER",
+ "VALID",
+}
+
+AVAILABLE_CEIL_MODE = {
+ 0,
+ 1,
+}
+
+AVAILABLE_STORAGE_ORDER = {
+ 0,
+ 1,
+}
+
+
+SUPPORTED_AUTO_PAD = {
+ "NOTSET",
+}
+
+SUPPORTED_CEIL_MODE = {
+ 0,
+}
+
+SUPPORTED_STORAGE_ORDER = {
+ 0,
+}
+
+
+# pylint: disable=no-member
+
+_EVALUATORS = {
+ 1: torch.max_pool1d,
+ 2: torch.max_pool2d,
+ 3: torch.max_pool3d,
+}
+
+# pylint: enable=no-member
+
+
+def maxpool(
+ x: Union[np.ndarray, Tracer],
+ kernel_shape: Union[Tuple[int, ...], List[int]],
+ strides: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ auto_pad: str = "NOTSET",
+ pads: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ dilations: Optional[Union[Tuple[int, ...], List[int]]] = None,
+ ceil_mode: int = 0,
+ storage_order: int = 0,
+) -> Union[np.ndarray, Tracer]:
+ """
+ Evaluate or trace MaxPool operation.
+
+ Refer to https://github.com/onnx/onnx/blob/main/docs/Operators.md#maxpool for more info.
+
+ Args:
+ x (Union[np.ndarray, Tracer]):
+ input of shape (N, C, D1, ..., DN)
+
+ kernel_shape (Union[Tuple[int, ...], List[int]]):
+ shape of the kernel
+
+ strides (Optional[Union[Tuple[int, ...], List[int]]]):
+ stride along each spatial axis
+ set to 1 along each spatial axis if not set
+
+ auto_pad (str, default = "NOTSET"):
+ padding strategy
+
+ pads (Optional[Union[Tuple[int, ...], List[int]]]):
+ padding for the beginning and ending along each spatial axis
+ (D1_begin, D2_begin, ..., D1_end, D2_end, ...)
+ set to 0 along each spatial axis if not set
+
+ dilations (Optional[Union[Tuple[int, ...], List[int]]]):
+ dilation along each spatial axis
+ set to 1 along each spatial axis if not set
+
+        ceil_mode (int, default = 0):
+ ceiling mode
+
+ storage_order (int, default = 0):
+ storage order, 0 for row major, 1 for column major
+
+ Raises:
+ TypeError:
+ if arguments are inappropriately typed
+
+ ValueError:
+ if arguments are inappropriate
+
+ NotImplementedError:
+ if desired operation is not supported yet
+
+ Returns:
+ Union[np.ndarray, Tracer]:
+ maxpool over the input or traced computation
+ """
+
+ def check_value_is_a_tuple_or_list_of_ints_of_size(value_name, value, size) -> Tuple[int, ...]:
+ if isinstance(value, list):
+ value = tuple(value)
+
+ if not isinstance(value, tuple):
+ message = (
+ f"Expected {value_name} to be a tuple or a list but it's {type(value).__name__}"
+ )
+ raise TypeError(message)
+
+ for element in value:
+ if not isinstance(element, int):
+ message = (
+ f"Expected {value_name} to consist of integers "
+ f"but it has an element of type {type(element).__name__}"
+ )
+ raise TypeError(message)
+
+ if len(value) != size:
+ message = f"Expected {value_name} to have {size} elements but it has {len(value)}"
+ raise ValueError(message)
+
+ return value
+
+ # check x
+
+ if isinstance(x, list): # pragma: no cover
+ try:
+ x = np.array(x)
+ except Exception: # pylint: disable=broad-except
+ pass
+
+ if isinstance(x, np.ndarray):
+ if not (
+ np.issubdtype(x.dtype, np.integer)
+ or np.issubdtype(x.dtype, np.floating)
+ or np.issubdtype(x.dtype, np.bool_)
+ ):
+ message = (
+ f"Expected input elements to be of type np.integer, np.floating, or np.bool_ "
+ f"but it's {type(x.dtype).__name__}"
+ )
+ raise TypeError(message)
+ elif not isinstance(x, Tracer):
+ message = (
+ f"Expected input to be of type np.ndarray or Tracer "
+            f"but it's {type(x).__name__}"
+ )
+ raise TypeError(message)
+
+ if x.ndim < 3:
+ message = (
+ f"Expected input to have at least 3 dimensions (N, C, D1, ...) "
+ f"but it only has {x.ndim}"
+ )
+ raise ValueError(message)
+
+ if x.ndim > 5:
+ message = f"{x.ndim - 2}D maximum pooling is not supported yet"
+ raise NotImplementedError(message)
+
+ # check kernel_shape
+
+ kernel_shape = check_value_is_a_tuple_or_list_of_ints_of_size(
+ "kernel_shape", kernel_shape, x.ndim - 2
+ )
+
+ # check strides
+
+ if strides is None:
+ strides = (1,) * (x.ndim - 2)
+
+ strides = check_value_is_a_tuple_or_list_of_ints_of_size("strides", strides, x.ndim - 2)
+
+ # check auto_pad
+
+ if not isinstance(auto_pad, str):
+ message = f"Expected auto_pad to be of type str but it's {type(auto_pad).__name__}"
+ raise TypeError(message)
+
+ if auto_pad not in AVAILABLE_AUTO_PAD:
+ message = (
+ f"Expected auto_pad to be one of "
+ f"{', '.join(sorted(AVAILABLE_AUTO_PAD))} "
+ f"but it's {auto_pad}"
+ )
+ raise ValueError(message)
+
+ if auto_pad not in SUPPORTED_AUTO_PAD:
+ message = f"Desired auto_pad of {auto_pad} is not supported yet"
+ raise NotImplementedError(message)
+
+ # check pads
+
+ if pads is None:
+ pads = (0,) * (2 * (x.ndim - 2))
+
+ pads = check_value_is_a_tuple_or_list_of_ints_of_size("pads", pads, 2 * (x.ndim - 2))
+
+ for i in range(len(pads) // 2):
+ pad_begin = pads[i]
+ pad_end = pads[i + len(pads) // 2]
+ if pad_begin != pad_end:
+ message = f"Desired pads of {pads} is not supported yet because of uneven padding"
+ raise NotImplementedError(message)
+
+ # check dilations
+
+ if dilations is None:
+ dilations = (1,) * (x.ndim - 2)
+
+ dilations = check_value_is_a_tuple_or_list_of_ints_of_size("dilations", dilations, x.ndim - 2)
+
+ # check ceil_mode
+
+ if not isinstance(ceil_mode, int):
+ message = f"Expected ceil_mode to be of type int but it's {type(ceil_mode).__name__}"
+ raise TypeError(message)
+
+ if ceil_mode not in AVAILABLE_CEIL_MODE:
+ message = (
+ f"Expected ceil_mode to be one of "
+ f"{', '.join(sorted(str(x) for x in AVAILABLE_CEIL_MODE))} "
+ f"but it's {ceil_mode}"
+ )
+ raise ValueError(message)
+
+ if ceil_mode not in SUPPORTED_CEIL_MODE:
+ message = f"Desired ceil_mode of {ceil_mode} is not supported yet"
+ raise NotImplementedError(message)
+
+ # check storage_order
+
+ if not isinstance(storage_order, int):
+ message = (
+ f"Expected storage_order to be of type int but it's {type(storage_order).__name__}"
+ )
+ raise TypeError(message)
+
+ if storage_order not in AVAILABLE_STORAGE_ORDER:
+ message = (
+ f"Expected storage_order to be one of "
+ f"{', '.join(sorted(str(x) for x in AVAILABLE_STORAGE_ORDER))} "
+ f"but it's {storage_order}"
+ )
+ raise ValueError(message)
+
+ if storage_order not in SUPPORTED_STORAGE_ORDER:
+ message = f"Desired storage_order of {storage_order} is not supported yet"
+ raise NotImplementedError(message)
+
+ # trace or evaluate
+ return _trace_or_evaluate(x, kernel_shape, strides, pads, dilations, ceil_mode == 1)
+
+
+def _trace_or_evaluate(
+ x: Union[np.ndarray, Tracer],
+ kernel_shape: Tuple[int, ...],
+ strides: Tuple[int, ...],
+ pads: Tuple[int, ...],
+ dilations: Tuple[int, ...],
+ ceil_mode: bool,
+):
+ if not isinstance(x, Tracer):
+        return _evaluate(x, kernel_shape, strides, pads, dilations, ceil_mode)
+
+    result = _evaluate(np.zeros(x.shape), kernel_shape, strides, pads, dilations, ceil_mode)
+ resulting_value = Value.of(result)
+
+ resulting_value.is_encrypted = x.output.is_encrypted
+ resulting_value.dtype = x.output.dtype
+
+ computation = Node.generic(
+ "maxpool",
+ [x.output],
+ resulting_value,
+ _evaluate,
+ kwargs={
+ "kernel_shape": kernel_shape,
+ "strides": strides,
+ "pads": pads,
+ "dilations": dilations,
+ "ceil_mode": ceil_mode,
+ },
+ )
+ return Tracer(computation, [x])
+
+
+def _evaluate(
+ x: np.ndarray,
+ kernel_shape: Tuple[int, ...],
+ strides: Tuple[int, ...],
+ pads: Tuple[int, ...],
+ dilations: Tuple[int, ...],
+ ceil_mode: bool,
+) -> np.ndarray:
+ # pylint: disable=no-member
+
+ dims = x.ndim - 2
+ assert_that(dims in {1, 2, 3})
+
+ evaluator = _EVALUATORS[dims]
+ result = (
+ evaluator(
+ torch.from_numpy(x.astype(np.float64)), # torch only supports float maxpools
+ kernel_shape,
+ strides,
+ pads[: len(pads) // 2],
+ dilations,
+ ceil_mode,
+ )
+ .numpy()
+ .astype(x.dtype)
+ )
+
+ # pylint: enable=no-member
+
+ return result
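To cross-check the pooling semantics, a naive pure-Python 1D maxpool can be sketched as follows (a hypothetical helper, not the torch-backed path used above). Note that maxpool padding uses negative infinity rather than zeros, so padded positions are never selected:

```python
def naive_maxpool1d(row, kernel, stride=1, pad=0, dilation=1):
    """Naive max pooling over a single 1D sequence (floor mode).

    Padded positions hold -inf so they are never picked as the maximum,
    matching MaxPool semantics rather than zero padding.
    """
    padded = [float("-inf")] * pad + list(row) + [float("-inf")] * pad
    span = dilation * (kernel - 1) + 1  # effective window extent
    out, pos = [], 0
    while pos + span <= len(padded):
        out.append(max(padded[pos + i * dilation] for i in range(kernel)))
        pos += stride
    return out


print(naive_maxpool1d([1, 3, 2, 5], 2))  # -> [3, 3, 5]
```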
diff --git a/frontends/concrete-python/docker/Dockerfile.dev b/frontends/concrete-python/docker/Dockerfile.dev
new file mode 100644
index 000000000..abbe68978
--- /dev/null
+++ b/frontends/concrete-python/docker/Dockerfile.dev
@@ -0,0 +1,53 @@
+FROM ubuntu:22.04
+
+ENV TZ=Europe/Paris
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
+# Replace default archive.ubuntu.com with fr mirror
+# original archive showed performance issues and is farther away
+RUN sed -i 's|^deb http://archive|deb http://fr.archive|g' /etc/apt/sources.list
+
+COPY ./script/make_utils/setup_os_deps.sh ./setup_os_deps.sh
+RUN ./setup_os_deps.sh --linux-install-python && rm ./setup_os_deps.sh
+
+ENV SRC_DIR=/src
+
+# Default to Ubuntu default uid for first user
+ARG BUILD_GID=1000
+ARG BUILD_UID=1000
+
+# Get sudo for our future user
+RUN apt-get update && \
+ apt-get install --no-install-recommends -y sudo && \
+ rm -rf /var/lib/apt/lists/*
+
+# From https://dev.to/emmanuelnk/using-sudo-without-password-prompt-as-non-root-docker-user-52bg
+# Create dev_user and add it to relevant groups
+# Create /src and make the dev user own it
+# Ensure sudo group users are not asked for a password when using
+# sudo command by amending sudoers file
+RUN groupadd -g "${BUILD_GID}" dev_user && \
+ adduser --disabled-password \
+ --uid "${BUILD_UID}" --gid "${BUILD_GID}" --shell /bin/bash --gecos "" dev_user && \
+ usermod -aG sudo dev_user && \
+ mkdir -p "${SRC_DIR}" && \
+ chown dev_user "${SRC_DIR}" && \
+ echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
+
+# Now switch to the newly created user
+USER dev_user
+
+RUN echo "source ~/dev_venv/bin/activate" >> ~/.bashrc && \
+ echo "if [[ \"\$?\" != \"0\" ]]; then" >> ~/.bashrc && \
+ echo " python3 -m venv ~/dev_venv" >> ~/.bashrc && \
+ echo " source ~/dev_venv/bin/activate" >> ~/.bashrc && \
+ echo " cd ${SRC_DIR}/ && make setup_env" >> ~/.bashrc && \
+ echo "fi" >> ~/.bashrc && \
+ echo "export MPLBACKEND=TkAgg" >> ~/.bashrc && \
+ touch ~/.sudo_as_admin_successful && \
+ mkdir -p ~/dev_venv && \
+ mkdir -p ~/.cache
+
+WORKDIR ${SRC_DIR}
+
+CMD ["/bin/bash"]
diff --git a/frontends/concrete-python/docker/Dockerfile.dev.dockerignore b/frontends/concrete-python/docker/Dockerfile.dev.dockerignore
new file mode 100644
index 000000000..69f299d0e
--- /dev/null
+++ b/frontends/concrete-python/docker/Dockerfile.dev.dockerignore
@@ -0,0 +1 @@
+!script/make_utils/setup_os_deps.sh
diff --git a/frontends/concrete-python/docker/Dockerfile.release b/frontends/concrete-python/docker/Dockerfile.release
new file mode 100644
index 000000000..4284bb0ce
--- /dev/null
+++ b/frontends/concrete-python/docker/Dockerfile.release
@@ -0,0 +1,36 @@
+FROM ubuntu:22.04
+
+ENV TZ=Europe/Paris
+RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
+# Replace default archive.ubuntu.com with fr mirror
+# original archive showed performance issues and is farther away
+RUN sed -i 's|^deb http://archive|deb http://fr.archive|g' /etc/apt/sources.list
+
+RUN mkdir /pkg && mkdir /app
+WORKDIR /pkg
+COPY docker/release_resources/release_requirements.txt .
+COPY ./pkg/*.whl .
+
+RUN apt-get update && apt-get upgrade --no-install-recommends -y && \
+ apt-get install --no-install-recommends -y \
+ build-essential \
+ python3-pip \
+ python3 \
+ python3-dev \
+ python3-tk \
+ python-is-python3 && \
+ rm -rf /var/lib/apt/lists/* && \
+ python3 -m pip install --no-cache-dir --upgrade pip wheel setuptools && \
+ echo "export MPLBACKEND=TkAgg" >> /root/.bashrc && \
+ python3 -m pip install --no-cache-dir "$(ls ./*.whl)" && \
+ python3 -m pip install --no-cache-dir -r release_requirements.txt
+
+WORKDIR /app
+COPY docker/release_resources/entry_point.sh ./entry_point.sh
+RUN mkdir /data
+
+WORKDIR /data
+VOLUME [ "/data" ]
+
+CMD ["/bin/bash", "-i", "/app/entry_point.sh"]
diff --git a/frontends/concrete-python/docker/Dockerfile.release.dockerignore b/frontends/concrete-python/docker/Dockerfile.release.dockerignore
new file mode 100644
index 000000000..4be3fd759
--- /dev/null
+++ b/frontends/concrete-python/docker/Dockerfile.release.dockerignore
@@ -0,0 +1,6 @@
+# Not our sources
+!docker/release_resources/entry_point.sh
+!docker/release_resources/release_requirements.txt
+
+!pkg/
+!pkg/**
diff --git a/frontends/concrete-python/docker/build_release_image.sh b/frontends/concrete-python/docker/build_release_image.sh
new file mode 100755
index 000000000..558f15aae
--- /dev/null
+++ b/frontends/concrete-python/docker/build_release_image.sh
@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+
+CURR_DIR=$(dirname "$0")
+DOCKER_BUILDKIT=1 docker build --pull --no-cache -f "$CURR_DIR/Dockerfile.release" \
+-t concrete-numpy-release "$CURR_DIR/.."
diff --git a/frontends/concrete-python/docker/release_resources/entry_point.sh b/frontends/concrete-python/docker/release_resources/entry_point.sh
new file mode 100755
index 000000000..33d5beffb
--- /dev/null
+++ b/frontends/concrete-python/docker/release_resources/entry_point.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+
+python3 -m jupyter notebook --ip=0.0.0.0 --allow-root --no-browser
diff --git a/frontends/concrete-python/docker/release_resources/release_requirements.txt b/frontends/concrete-python/docker/release_resources/release_requirements.txt
new file mode 100644
index 000000000..1a74d417f
--- /dev/null
+++ b/frontends/concrete-python/docker/release_resources/release_requirements.txt
@@ -0,0 +1 @@
+jupyter~=1.0.0
diff --git a/frontends/concrete-python/docker/release_resources/sanity_check.py b/frontends/concrete-python/docker/release_resources/sanity_check.py
new file mode 100644
index 000000000..676c34e6c
--- /dev/null
+++ b/frontends/concrete-python/docker/release_resources/sanity_check.py
@@ -0,0 +1,41 @@
+"""Sanity check script for the release Docker image."""
+
+import random
+
+import concrete.numpy as cnp
+
+
+def main():
+ def function_to_compile(x):
+ return x + 42
+
+ n_bits = 3
+
+ compiler = cnp.Compiler(
+ function_to_compile,
+ {"x": "encrypted"},
+ )
+
+ print("Compiling...")
+
+ engine = compiler.compile(range(2 ** n_bits))
+
+ inputs = []
+ labels = []
+ for _ in range(4):
+ sample_x = random.randint(0, 2 ** n_bits - 1)
+
+ inputs.append([sample_x])
+ labels.append(function_to_compile(*inputs[-1]))
+
+ correct = 0
+ for idx, (input_i, label_i) in enumerate(zip(inputs, labels), 1):
+ print(f"Inference #{idx}")
+ result_i = engine.encrypt_run_decrypt(*input_i)
+
+ if result_i == label_i:
+ correct += 1
+
+ print(f"{correct}/{len(inputs)}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/frontends/concrete-python/docs/README.md b/frontends/concrete-python/docs/README.md
new file mode 100644
index 000000000..ad583be34
--- /dev/null
+++ b/frontends/concrete-python/docs/README.md
@@ -0,0 +1,24 @@
+# What is Concrete Numpy?
+
+[⭐️ Star the repo on Github ](https://github.com/zama-ai/concrete-numpy) | 🗣 [Community support forum ](https://community.zama.ai/c/concrete-numpy) | 📁 [Contribute to the project ](dev/contributing.md)
+
+
+
+**Concrete-Numpy** is an open-source library that simplifies the use of fully homomorphic encryption (FHE).
+
+FHE is a powerful cryptographic tool that allows computation to be performed directly on encrypted data, without needing to decrypt it first. With FHE, you can build services that preserve privacy for all users. FHE also protects against data breaches: since everything is done on encrypted data, no sensitive data is leaked even if the server is compromised.
+
+## Organization of this documentation
+
+This documentation is split into several sections:
+
+* **Getting Started** gives you the basics,
+* **Tutorials** gives you some essential examples on various features of the library,
+* **How to** helps you perform specific tasks,
+* and **Developer** explains the inner workings of the library and everything related to contributing to the project.
+
+## Looking for support? Ask our team!
+
+* Support forum: [https://community.zama.ai](https://community.zama.ai) (we answer in less than 24 hours).
+* Live discussion on the FHE.org discord server: [https://discord.fhe.org](https://discord.fhe.org) (inside the #**concrete** channel).
+* Do you have a question about Zama? You can write to us on [Twitter](https://twitter.com/zama\_fhe) or send us an email at: **hello@zama.ai**
diff --git a/frontends/concrete-python/docs/SUMMARY.md b/frontends/concrete-python/docs/SUMMARY.md
new file mode 100644
index 000000000..4b6fe006b
--- /dev/null
+++ b/frontends/concrete-python/docs/SUMMARY.md
@@ -0,0 +1,40 @@
+# Table of contents
+
+* [What is Concrete Numpy?](README.md)
+
+## Getting Started
+
+* [Installation](getting-started/installing.md)
+* [Quick Start](getting-started/quick\_start.md)
+* [Compatibility](getting-started/compatibility.md)
+* [Exactness](getting-started/exactness.md)
+* [Performance](getting-started/performance.md)
+
+## Tutorials
+
+* [Decorator](tutorial/decorator.md)
+* [Formatting](tutorial/formatting.md)
+* [Tagging](tutorial/tagging.md)
+* [Extensions](tutorial/extensions.md)
+* [Table Lookups](tutorial/table\_lookups.md)
+* [Rounded Table Lookups](tutorial/rounded\_table\_lookups.md)
+* [Floating Points](tutorial/floating\_points.md)
+* [Simulation](tutorial/simulation.md)
+* [Direct Circuits](tutorial/direct\_circuits.md)
+* [Key Value Database](tutorial/key\_value\_database.md)
+
+## How To
+
+* [Configure](howto/configure.md)
+* [Debug](howto/debug.md)
+* [Deploy](howto/deploy.md)
+
+## Developer
+
+* [Project Setup](dev/project\_setup.md)
+* [Docker Setup](dev/docker.md)
+* [Contribute](dev/contributing.md)
+* [Terminology and Structure](dev/terminology\_and\_structure.md)
+* [Compilation](dev/compilation.md)
+* [Fusing](dev/fusing.md)
+* [MLIR](dev/mlir.md)
diff --git a/frontends/concrete-python/docs/_static/basics/compiling_and_executing_example_graph.png b/frontends/concrete-python/docs/_static/basics/compiling_and_executing_example_graph.png
new file mode 100644
index 000000000..0ace69983
Binary files /dev/null and b/frontends/concrete-python/docs/_static/basics/compiling_and_executing_example_graph.png differ
diff --git a/frontends/concrete-python/docs/_static/compilation-pipeline/two_x_plus_three.png b/frontends/concrete-python/docs/_static/compilation-pipeline/two_x_plus_three.png
new file mode 100644
index 000000000..61dc8ad49
Binary files /dev/null and b/frontends/concrete-python/docs/_static/compilation-pipeline/two_x_plus_three.png differ
diff --git a/frontends/concrete-python/docs/_static/float_fusing_example/after.png b/frontends/concrete-python/docs/_static/float_fusing_example/after.png
new file mode 100644
index 000000000..ddb214a4a
Binary files /dev/null and b/frontends/concrete-python/docs/_static/float_fusing_example/after.png differ
diff --git a/frontends/concrete-python/docs/_static/float_fusing_example/after_bigger_search.png b/frontends/concrete-python/docs/_static/float_fusing_example/after_bigger_search.png
new file mode 100644
index 000000000..1933fed4c
Binary files /dev/null and b/frontends/concrete-python/docs/_static/float_fusing_example/after_bigger_search.png differ
diff --git a/frontends/concrete-python/docs/_static/float_fusing_example/before.png b/frontends/concrete-python/docs/_static/float_fusing_example/before.png
new file mode 100644
index 000000000..94d55f981
Binary files /dev/null and b/frontends/concrete-python/docs/_static/float_fusing_example/before.png differ
diff --git a/frontends/concrete-python/docs/_static/float_fusing_example/before_bigger_search.png b/frontends/concrete-python/docs/_static/float_fusing_example/before_bigger_search.png
new file mode 100644
index 000000000..e782e5950
Binary files /dev/null and b/frontends/concrete-python/docs/_static/float_fusing_example/before_bigger_search.png differ
diff --git a/frontends/concrete-python/docs/_static/float_fusing_example/subgraph.png b/frontends/concrete-python/docs/_static/float_fusing_example/subgraph.png
new file mode 100644
index 000000000..31ca8d4bc
Binary files /dev/null and b/frontends/concrete-python/docs/_static/float_fusing_example/subgraph.png differ
diff --git a/frontends/concrete-python/docs/_static/float_fusing_example/subgraph_bigger_search.png b/frontends/concrete-python/docs/_static/float_fusing_example/subgraph_bigger_search.png
new file mode 100644
index 000000000..3ba4dff63
Binary files /dev/null and b/frontends/concrete-python/docs/_static/float_fusing_example/subgraph_bigger_search.png differ
diff --git a/frontends/concrete-python/docs/_static/mlir/MLIR_conversion.png b/frontends/concrete-python/docs/_static/mlir/MLIR_conversion.png
new file mode 100644
index 000000000..3f78d77f2
Binary files /dev/null and b/frontends/concrete-python/docs/_static/mlir/MLIR_conversion.png differ
diff --git a/frontends/concrete-python/docs/_static/p_error_simulation.pdf b/frontends/concrete-python/docs/_static/p_error_simulation.pdf
new file mode 100644
index 000000000..c9ff8e655
Binary files /dev/null and b/frontends/concrete-python/docs/_static/p_error_simulation.pdf differ
diff --git a/frontends/concrete-python/docs/_static/rounded-tlu/10-bits-removed.png b/frontends/concrete-python/docs/_static/rounded-tlu/10-bits-removed.png
new file mode 100644
index 000000000..1a405437b
Binary files /dev/null and b/frontends/concrete-python/docs/_static/rounded-tlu/10-bits-removed.png differ
diff --git a/frontends/concrete-python/docs/_static/rounded-tlu/12-bits-removed.png b/frontends/concrete-python/docs/_static/rounded-tlu/12-bits-removed.png
new file mode 100644
index 000000000..d1a36f841
Binary files /dev/null and b/frontends/concrete-python/docs/_static/rounded-tlu/12-bits-removed.png differ
diff --git a/frontends/concrete-python/docs/_static/rounded-tlu/4-bits-kept.png b/frontends/concrete-python/docs/_static/rounded-tlu/4-bits-kept.png
new file mode 100644
index 000000000..81cc38b00
Binary files /dev/null and b/frontends/concrete-python/docs/_static/rounded-tlu/4-bits-kept.png differ
diff --git a/frontends/concrete-python/docs/_static/rounded-tlu/6-bits-kept.png b/frontends/concrete-python/docs/_static/rounded-tlu/6-bits-kept.png
new file mode 100644
index 000000000..d1a36f841
Binary files /dev/null and b/frontends/concrete-python/docs/_static/rounded-tlu/6-bits-kept.png differ
diff --git a/frontends/concrete-python/docs/_static/rounded-tlu/relu.png b/frontends/concrete-python/docs/_static/rounded-tlu/relu.png
new file mode 100644
index 000000000..ee82f0cf7
Binary files /dev/null and b/frontends/concrete-python/docs/_static/rounded-tlu/relu.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/artifacts/auto/1.initial.graph.png b/frontends/concrete-python/docs/_static/tutorials/artifacts/auto/1.initial.graph.png
new file mode 100644
index 000000000..462f2093c
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/artifacts/auto/1.initial.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/artifacts/auto/2.final.graph.png b/frontends/concrete-python/docs/_static/tutorials/artifacts/auto/2.final.graph.png
new file mode 100644
index 000000000..462f2093c
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/artifacts/auto/2.final.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/1.initial.graph.png b/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/1.initial.graph.png
new file mode 100644
index 000000000..ef26ca811
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/1.initial.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/2.after-fusing.graph.png b/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/2.after-fusing.graph.png
new file mode 100644
index 000000000..cbb9c3f45
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/2.after-fusing.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/3.final.graph.png b/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/3.final.graph.png
new file mode 100644
index 000000000..cbb9c3f45
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/artifacts/manual/3.final.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/table-lookup/1.initial.graph.png b/frontends/concrete-python/docs/_static/tutorials/table-lookup/1.initial.graph.png
new file mode 100644
index 000000000..0849bc65d
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/table-lookup/1.initial.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/tutorials/table-lookup/3.final.graph.png b/frontends/concrete-python/docs/_static/tutorials/table-lookup/3.final.graph.png
new file mode 100644
index 000000000..cb5756bc1
Binary files /dev/null and b/frontends/concrete-python/docs/_static/tutorials/table-lookup/3.final.graph.png differ
diff --git a/frontends/concrete-python/docs/_static/zama_home_docs.png b/frontends/concrete-python/docs/_static/zama_home_docs.png
new file mode 100644
index 000000000..02f23b7f8
Binary files /dev/null and b/frontends/concrete-python/docs/_static/zama_home_docs.png differ
diff --git a/frontends/concrete-python/docs/conftest.py b/frontends/concrete-python/docs/conftest.py
new file mode 120000
index 000000000..3e21d51c8
--- /dev/null
+++ b/frontends/concrete-python/docs/conftest.py
@@ -0,0 +1 @@
+../tests/conftest.py
\ No newline at end of file
diff --git a/frontends/concrete-python/docs/dev/compilation.md b/frontends/concrete-python/docs/dev/compilation.md
new file mode 100644
index 000000000..66159d363
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/compilation.md
@@ -0,0 +1,148 @@
+# Compilation
+
+The compilation journey begins with tracing to get an easy-to-manipulate representation of the function. We call this representation a `Computation Graph`, which is basically a Directed Acyclic Graph (DAG) containing nodes representing the computations done in the function. Working with graphs is good because they have been studied extensively over the years and there are a lot of algorithms to manipulate them. Internally, we use [networkx](https://networkx.org), which is an excellent graph library for Python.
+
+The next step in the compilation is transforming the computation graph. There are many transformations we perform, and they will be discussed in their own sections. In any case, the result of transformations is just another computation graph.
+
+After transformations are applied, we need to determine the bounds (i.e., the minimum and the maximum values) of each intermediate node. This is required because FHE currently supports only limited precision for computations. Bounds measurement is how we determine the precision required for the function.
+
+The final step is to transform the computation graph to equivalent `MLIR` code. How this is done will be explained in detail in its own chapter.
+
+Once the MLIR is generated, we send it to the **Concrete-Compiler**, and it completes the compilation process.
+
+## Tracing
+
+Given a Python function `f` such as this one:
+
+```python
+def f(x):
+    return (2 * x) + 3
+```
+
+...the goal of tracing is to create the following computation graph without needing any change from the user.
+
+
+
+(Note that the edge labels are for non-commutative operations. To give an example, a subtraction node represents `(predecessor with edge label 0) - (predecessor with edge label 1)`)
+
+To do this, we make use of `Tracer`s, which are objects that record the operation performed during their creation. We create a `Tracer` for each argument of the function and call the function with those tracers. `Tracer`s make use of the operator overloading feature of Python to achieve their goal:
+
+```python
+def f(x, y):
+    return x + 2 * y
+
+x = Tracer(computation=Input("x"))
+y = Tracer(computation=Input("y"))
+
+resulting_tracer = f(x, y)
+```
+
+`2 * y` will be performed first, and `*` is overloaded for `Tracer` to return another tracer: `Tracer(computation=Multiply(Constant(2), self.computation))`, which is equal to `Tracer(computation=Multiply(Constant(2), Input("y")))`.
+
+`x + (2 * y)` will be performed next, and `+` is overloaded for `Tracer` to return another tracer: `Tracer(computation=Add(self.computation, (2 * y).computation))`, which is equal to `Tracer(computation=Add(Input("x"), Multiply(Constant(2), Input("y"))))`.
+
+In the end, we will have output tracers that can be used to create the computation graph. The implementation is a bit more complex than this, but the idea is the same.
+
+Tracing is also responsible for indicating whether each node's value is encrypted or not; the rule is that if a node has an encrypted predecessor, it is encrypted as well.
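+
+The mechanism above, including the encryption rule, can be sketched with a toy tracer. The `Node` class and the string operation tags are illustrative, not the actual implementation:
+
+```python
+class Node:
+    """A node of the computation graph."""
+
+    def __init__(self, operation, *predecessors, encrypted=False):
+        self.operation = operation
+        self.predecessors = predecessors
+        # a node is encrypted if it's marked as such or has an encrypted predecessor
+        self.encrypted = encrypted or any(p.encrypted for p in predecessors)
+
+class Tracer:
+    """Records operations performed on it by overloading Python operators."""
+
+    def __init__(self, computation):
+        self.computation = computation
+
+    @staticmethod
+    def _lift(value):
+        # wrap plain Python constants into constant nodes
+        return value if isinstance(value, Tracer) else Tracer(Node(("constant", value)))
+
+    def __add__(self, other):
+        return Tracer(Node("add", self.computation, Tracer._lift(other).computation))
+
+    def __mul__(self, other):
+        return Tracer(Node("multiply", self.computation, Tracer._lift(other).computation))
+
+    # good enough for commutative operations in this sketch
+    __radd__ = __add__
+    __rmul__ = __mul__
+
+def f(x, y):
+    return x + 2 * y
+
+x = Tracer(Node("input", encrypted=True))
+y = Tracer(Node("input", encrypted=False))
+
+result = f(x, y)
+print(result.computation.operation)  # add
+print(result.computation.encrypted)  # True, because x is an encrypted predecessor
+```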
+
+## Topological transforms
+
+The goal of topological transforms is to make more functions compilable.
+
+With the current version of **Concrete-Numpy**, floating-point inputs and floating-point outputs are not supported. However, if the floating-point operations are intermediate operations, they can sometimes be fused into a single table lookup from integer to integer, thanks to some specific transforms.
+
+Let's take a closer look at the transforms we can currently perform.
+
+### Fusing.
+
+We have allocated a whole new chapter to explaining fusing. You can find it after this chapter.
+
+## Bounds measurement
+
+Given a computation graph, the goal of the bound measurement step is to assign the minimal data type to each node in the graph.
+
+Let's say we have an encrypted input that is always between `0` and `10`. We should assign the type `Encrypted<uint4>` to the node of this input, as `uint4` is the minimal unsigned integer type that supports all values between `0` and `10`.
+
+If there were negative values in the range, we could have used `intX` instead of `uintX`.
+
+Bounds measurement is necessary because FHE supports limited precision, and we don't want unexpected behaviour while evaluating the compiled functions.
+
+Let's take a closer look at how we perform bounds measurement.
+
+### Inputset evaluation.
+
+This is a simple approach that requires an inputset to be provided by the user.
+
+The inputset is not to be confused with a dataset in the classical ML sense, as it doesn't require labels. Rather, it is a set of values that are typical inputs of the function.
+
+The idea is to evaluate each input in the inputset and record the result of each operation in the computation graph. Then we compare the evaluation results with the current minimum/maximum values of each node and update the minimum/maximum accordingly. After the entire inputset is evaluated, we assign a data type to each node using the minimum and the maximum values it contains.
+
+Here is an example, given this computation graph where `x` is encrypted:
+
+
+
+and this inputset:
+
+```
+[2, 3, 1]
+```
+
+Evaluation Result of `2`:
+
+* `x`: 2
+* `2`: 2
+* `*`: 4
+* `3`: 3
+* `+`: 7
+
+New Bounds:
+
+* `x`: \[**2**, **2**]
+* `2`: \[**2**, **2**]
+* `*`: \[**4**, **4**]
+* `3`: \[**3**, **3**]
+* `+`: \[**7**, **7**]
+
+Evaluation Result of `3`:
+
+* `x`: 3
+* `2`: 2
+* `*`: 6
+* `3`: 3
+* `+`: 9
+
+New Bounds:
+
+* `x`: \[2, **3**]
+* `2`: \[2, 2]
+* `*`: \[4, **6**]
+* `3`: \[3, 3]
+* `+`: \[7, **9**]
+
+Evaluation Result of `1`:
+
+* `x`: 1
+* `2`: 2
+* `*`: 2
+* `3`: 3
+* `+`: 5
+
+New Bounds:
+
+* `x`: \[**1**, 3]
+* `2`: \[2, 2]
+* `*`: \[**2**, 6]
+* `3`: \[3, 3]
+* `+`: \[**5**, 9]
+
+Assigned Data Types:
+
+* `x`: Encrypted<**uint2**>
+* `2`: Clear<**uint2**>
+* `*`: Encrypted<**uint3**>
+* `3`: Clear<**uint2**>
+* `+`: Encrypted<**uint4**>
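+
+The bookkeeping above can be sketched as follows (a toy version hard-coded for this specific graph; the actual implementation works on arbitrary computation graphs):
+
+```python
+def evaluate(x):
+    # record the evaluation result of every node for one input
+    return {"x": x, "2": 2, "*": 2 * x, "3": 3, "+": (2 * x) + 3}
+
+def measure_bounds(inputset):
+    bounds = {}
+    for sample in inputset:
+        for node, value in evaluate(sample).items():
+            low, high = bounds.get(node, (value, value))
+            bounds[node] = (min(low, value), max(high, value))
+    return bounds
+
+def required_bit_width(low, high):
+    # unsigned case: enough bits to represent the maximum value
+    assert low >= 0
+    return max(high.bit_length(), 1)
+
+bounds = measure_bounds([2, 3, 1])
+print(bounds["+"])                       # (5, 9)
+print(required_bit_width(*bounds["+"]))  # 4, hence uint4
+```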
+
+## MLIR conversion
+
+The actual compilation will be done by the **Concrete-Compiler**, which is expecting an MLIR input. The MLIR conversion goes from a computation graph to its MLIR equivalent. You can read more about it [here](mlir.md).
diff --git a/frontends/concrete-python/docs/dev/contributing.md b/frontends/concrete-python/docs/dev/contributing.md
new file mode 100644
index 000000000..92285672b
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/contributing.md
@@ -0,0 +1,99 @@
+# Contribute
+
+{% hint style="info" %}
+There are two ways to contribute to **Concrete-Numpy** or to **Concrete** tools in general:
+
+* You can open issues to report bugs and typos and to suggest ideas.
+* You can ask to become an official contributor by emailing hello@zama.ai. Only approved contributors can send pull requests (PRs), so please make sure to get in touch before you do!
+{% endhint %}
+
+Now, let's go over some other important items that you need to know.
+
+## Creating a new branch
+
+We are using a consistent branch naming scheme, and you are expected to follow it as well. Here is the format:
+
+```shell
+git checkout -b {feat|fix|refactor|test|benchmark|doc|style|chore}/short-description
+```
+
+...and here are some examples:
+
+```shell
+git checkout -b feat/direct-tlu
+git checkout -b fix/tracing-indexing
+```
+
+## Before committing
+
+### Conformance.
+
+Each commit to **Concrete-Numpy** should conform to the standards decided by the team. Conformance can be checked using the following command:
+
+```shell
+make pcc
+```
+
+### Testing.
+
+On top of conformance, all tests must pass with 100% code coverage across the codebase:
+
+```shell
+make pytest
+```
+
+{% hint style="info" %}
+There may be cases where covering 100% of the code is not possible (e.g., exceptions that cannot be triggered in normal execution circumstances). In those cases, you may be allowed to disable coverage for some specific lines. This should be the exception rather than the rule. Reviewers may ask why some lines are not covered and, if it appears they can be covered, then the PR won't be accepted in that state.
+{% endhint %}
+
+## Committing
+
+We are using a consistent commit naming scheme, and you are expected to follow it as well. You can list the accepted scopes with:
+
+```shell
+make show_scope
+```
+
+...and some examples:
+
+```shell
+git commit -m "feat: implement bounds checking"
+git commit -m "feat(debugging): add a helper function to print intermediate representation"
+git commit -m "fix(tracing): fix a bug that crashed pytorch tracer"
+```
+
+To learn more about conventional commits, check [this](https://www.conventionalcommits.org/en/v1.0.0/) page.
+
+## Before creating a pull request
+
+{% hint style="info" %}
+We remind you that only official contributors can send pull requests. To become an official contributor, please email hello@zama.ai.
+{% endhint %}
+
+You should rebase on top of the `main` branch before you create your pull request. We don't allow merge commits, so rebasing on `main` before pushing gives you the best chance of avoiding rewriting parts of your PR later if conflicts arise with other PRs being merged. After you commit your changes to your new branch, you can use the following commands to rebase:
+
+```shell
+# fetch the list of active remote branches
+git fetch --all --prune
+
+# checkout to main
+git checkout main
+
+# pull the latest changes to main (--ff-only is there to prevent accidental commits to main)
+git pull --ff-only
+
+# checkout back to your branch
+git checkout $YOUR_BRANCH
+
+# rebase on top of main branch
+git rebase main
+
+# If there are conflicts during the rebase, resolve them
+# and continue the rebase with the following command
+git rebase --continue
+
+# push the latest version of the local branch to remote
+git push --force
+```
+
+You can learn more about rebasing [here](https://git-scm.com/docs/git-rebase).
diff --git a/frontends/concrete-python/docs/dev/docker.md b/frontends/concrete-python/docs/dev/docker.md
new file mode 100644
index 000000000..37f321197
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/docker.md
@@ -0,0 +1,45 @@
+# Docker Setup
+
+## Installation
+
+Before you start this section, go ahead and install Docker. You can follow [this](https://docs.docker.com/engine/install/) official guide if you need help.
+
+## X forwarding
+
+### Linux.
+
+You can use this xhost command:
+
+```shell
+xhost +localhost
+```
+
+### macOS.
+
+To use X forwarding on macOS:
+
+* Install XQuartz
+* Open the XQuartz.app application and make sure `Allow connections from network clients` is enabled in the preferences (currently under the Security settings)
+* Open a new terminal within XQuartz.app and type:
+
+```shell
+xhost +127.0.0.1
+```
+
+The X server should now be all set for Docker in the regular terminal.
+
+## Building
+
+You can use the dedicated target in the makefile to build the docker image:
+
+```shell
+make docker_build
+```
+
+## Starting
+
+You can use the dedicated target in the makefile to start the docker session:
+
+```shell
+make docker_start
+```
diff --git a/frontends/concrete-python/docs/dev/fusing.md b/frontends/concrete-python/docs/dev/fusing.md
new file mode 100644
index 000000000..547a5df84
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/fusing.md
@@ -0,0 +1,37 @@
+# Fusing
+
+Fusing is the act of combining multiple nodes into a single node, which is converted to a table lookup.
+
+## How is it done?
+
+Code related to fusing is in the `concrete/numpy/compilation/utils.py` file. Fusing can be performed using the `fuse` function.
+
+Within `fuse`:
+
+1. We loop until there are no more subgraphs to fuse.
+2. Within each iteration:
+ 2.1. We find a subgraph to fuse.
+
+ 2.2. We search for a terminal node that is appropriate for fusing.
+
+ 2.3. We crawl backwards to find the closest integer nodes to this node.
+
+ 2.4. If there is a single such node, we return the subgraph from this node to the terminal node.
+
+ 2.5. Otherwise, we try to find the lowest common ancestor (lca) of this list of nodes.
+
+ 2.6. If an lca doesn't exist, we say this particular terminal node is not fusable, and we go back to search for another subgraph.
+
+ 2.7. Otherwise, we use this lca as the input of the subgraph and continue with `subgraph` node creation below.
+
+ 2.8. We convert the subgraph into a `subgraph` node, checking the fusability status of the nodes of the subgraph in this step.
+
+ 2.9. We substitute the `subgraph` node into the original graph.
+
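+The lca search in step 2.5 can be sketched on a plain predecessor map (illustrative only; the actual implementation lives in `concrete/numpy/compilation/utils.py`):
+
+```python
+# predecessor map of the graph of np.round(np.sin(x) + np.cos(y))
+predecessors = {
+    "x": [], "y": [],
+    "sin": ["x"], "cos": ["y"],
+    "add": ["sin", "cos"],
+    "round": ["add"],
+}
+
+def ancestors(node):
+    # the node itself plus everything reachable through its predecessors
+    result = {node}
+    for predecessor in predecessors[node]:
+        result |= ancestors(predecessor)
+    return result
+
+def lowest_common_ancestor(nodes):
+    common = set.intersection(*(ancestors(node) for node in nodes))
+    for candidate in common:
+        # the lca is the common ancestor that has every other one among its ancestors
+        if common <= ancestors(candidate):
+            return candidate
+    return None  # no lca: this terminal node is not fusable
+
+print(lowest_common_ancestor(["sin", "x"]))    # x
+print(lowest_common_ancestor(["sin", "cos"]))  # None
+```
+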
+## Limitations
+
+With the current implementation, we cannot fuse subgraphs that depend on multiple encrypted values where those values don't have a common lca (e.g., `np.round(np.sin(x) + np.cos(y))`).
+
+{% hint style="info" %}
+[Kolmogorov–Arnold representation theorem](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold\_representation\_theorem) states that every multivariate continuous function can be represented as a superposition of continuous functions of one variable. Therefore, the case above could be handled in future versions of **Concrete-Numpy**.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/dev/mlir.md b/frontends/concrete-python/docs/dev/mlir.md
new file mode 100644
index 000000000..2dc6b4798
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/mlir.md
@@ -0,0 +1,17 @@
+# MLIR
+
+The MLIR project is a sub-project of the LLVM project. It's designed to simplify building domain-specific compilers such as our **Concrete-Compiler**.
+
+**Concrete-Compiler** accepts MLIR as an input and emits compiled assembly code for a target architecture.
+
+**Concrete-Numpy** performs the MLIR generation from the computation graph. Code related to this conversion is in the `concrete/numpy/mlir` folder.
+
+The conversion can be performed using the `convert` method of the `GraphConverter` class.
+
+Within the `convert` method of `GraphConverter`:
+
+* MLIR compatibility of the graph is checked;
+* bit width constraints are checked;
+* negative lookup tables are offset;
+* the computation graph is traversed and each node is converted to its corresponding MLIR representation using the `NodeConverter` class;
+* and string representation of the resulting MLIR is returned.
diff --git a/frontends/concrete-python/docs/dev/project_setup.md b/frontends/concrete-python/docs/dev/project_setup.md
new file mode 100644
index 000000000..5c48ff53d
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/project_setup.md
@@ -0,0 +1,91 @@
+# Project Setup
+
+{% hint style="info" %}
+It is **strongly** recommended to use Docker for development. However, you can set the project up on a bare Linux or macOS machine as long as you have the required dependencies, which are listed in `Dockerfile.dev` under the `docker` directory.
+{% endhint %}
+
+## Installing `Python`
+
+**Concrete-Numpy** is a `Python` library, so `Python` should be installed to develop it. `v3.8` and `v3.9` are currently the only supported versions.
+
+You probably have Python already, but in case you don't, or in case you have an unsupported version, you can google `how to install python 3.8` and follow one of the results.
+
+## Installing `Poetry`
+
+`Poetry` is our package manager. It drastically simplifies dependency and environment management.
+
+You can follow [this](https://python-poetry.org/docs/#installation) official guide to install it.
+
+## Installing `make`
+
+`make` is used to launch various commands such as formatting and testing.
+
+On Linux, you can install `make` using the package manager of your distribution.
+
+On macOS, you can install `gmake` via brew:
+
+```shell
+brew install make
+```
+
+{% hint style="info" %}
+In the following sections, be sure to use the proper `make` tool for your system (i.e., `make`, `gmake`, etc).
+{% endhint %}
+
+## Cloning the repository
+
+Now, it's time to get the source code of **Concrete-Numpy**.
+
+Clone the git repository from GitHub using the protocol of your choice (ssh or https).
+
+## Setting up the environment
+
+Virtual environments are utilized to keep the project isolated from other `Python` projects in the system.
+
+To create a new virtual environment and install dependencies, use the command:
+
+```shell
+make setup_env
+```
+
+## Activating the environment
+
+To activate the newly created environment, use:
+
+```shell
+source .venv/bin/activate
+```
+
+## Syncing the environment
+
+From time to time, new dependencies will be added to the project and old ones will be removed. The command below will make sure the project has the proper environment, so run it regularly.
+
+```shell
+make sync_env
+```
+
+## Troubleshooting
+
+### In native setups.
+
+If you are having issues in a native setup, you can try to re-create your environment like this:
+
+```shell
+deactivate
+rm -rf .venv
+make setup_env
+source .venv/bin/activate
+```
+
+If the problem persists, you should consider using Docker. If you are working on a platform-specific feature and Docker is not an option, you should create an issue so that we can take a look at your problem.
+
+### In docker setups.
+
+If you are having issues in a docker setup, you can try to re-build the docker image:
+
+```shell
+make docker_rebuild
+make docker_start
+```
+
+If the problem persists, you should contact us for help.
diff --git a/frontends/concrete-python/docs/dev/releasing.md b/frontends/concrete-python/docs/dev/releasing.md
new file mode 100644
index 000000000..f92efa8d7
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/releasing.md
@@ -0,0 +1,17 @@
+# Release process
+
+## Release candidate cycle
+
+Throughout the quarter, many release candidates are released. Those candidates are released in a private package repository. At the end of the quarter, we take the latest release candidate and release it on PyPI without the `rcX` tag.
+
+## Release flow
+
+* Check out the commit that you want to include in the release (this commit and everything before it will be in the release)
+* Run `make release`
+* Wait for CI to complete
+* Check out the `chore/version` branch
+* Run `VERSION=a.b.c-rcX make set_version` with the appropriate version
+* Push the branch to origin
+* Create a PR to merge it into `main`
+* Wait for CI to finish and get approval in the meantime
+* Merge the version update to `main`
diff --git a/frontends/concrete-python/docs/dev/terminology_and_structure.md b/frontends/concrete-python/docs/dev/terminology_and_structure.md
new file mode 100644
index 000000000..051810cff
--- /dev/null
+++ b/frontends/concrete-python/docs/dev/terminology_and_structure.md
@@ -0,0 +1,26 @@
+# Terminology and Structure
+
+## Terminology
+
+Some terms used throughout the project include:
+
+* computation graph - a data structure to represent a computation. This is basically a directed acyclic graph in which nodes are either inputs, constants or operations on other nodes.
+* tracing - the technique that takes a Python function from the user and generates the corresponding computation graph in an easy-to-read format.
+* bounds - before a computation graph is converted to MLIR, we need to know which node will output which type (e.g., uint3 vs euint5). The computation graph is evaluated with different inputs to record the minimum and maximum values of each node; these are what we call bounds, and they are used to determine the appropriate type for each node.
+* circuit - the result of compilation. A circuit is made of the client and server components and has methods for everything from printing to evaluation.
+
+## Module structure
+
+In this section, we will briefly discuss the module structure of **Concrete-Numpy**. You are encouraged to check individual `.py` files to learn more.
+
+* Concrete
+ * Numpy
+ * dtypes - data type specifications
+ * values - value specifications (i.e., data type + shape + encryption status)
+ * representation - representation of computation
+ * tracing - tracing of Python functions
+ * extensions - custom functionality which is not available in NumPy (e.g., direct table lookups)
+ * MLIR - MLIR conversion
+ * compilation - compilation from a Python function to a circuit, client/server architecture
+ * ONNX
+ * convolution - custom convolution operations that follow the behavior of ONNX
diff --git a/frontends/concrete-python/docs/getting-started/compatibility.md b/frontends/concrete-python/docs/getting-started/compatibility.md
new file mode 100644
index 000000000..60e74fa9b
--- /dev/null
+++ b/frontends/concrete-python/docs/getting-started/compatibility.md
@@ -0,0 +1,177 @@
+# Compatibility
+
+## Supported operations
+
+Here are the operations you can use inside the function you are compiling.
+
+{% hint style="info" %}
+Some of these operations are not supported between two encrypted values. A detailed error will be raised if you try to do something that is not supported.
+{% endhint %}
+
+### Supported Python operators.
+
+* [\_\_abs\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_abs\_\_)
+* [\_\_add\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_add\_\_)
+* [\_\_and\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_and\_\_)
+* [\_\_eq\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_eq\_\_)
+* [\_\_floordiv\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_floordiv\_\_)
+* [\_\_ge\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_ge\_\_)
+* [\_\_getitem\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_getitem\_\_)
+* [\_\_gt\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_gt\_\_)
+* [\_\_invert\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_invert\_\_)
+* [\_\_le\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_le\_\_)
+* [\_\_lshift\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_lshift\_\_)
+* [\_\_lt\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_lt\_\_)
+* [\_\_matmul\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_matmul\_\_)
+* [\_\_mod\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_mod\_\_)
+* [\_\_mul\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_mul\_\_)
+* [\_\_ne\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_ne\_\_)
+* [\_\_neg\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_neg\_\_)
+* [\_\_or\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_or\_\_)
+* [\_\_pos\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_pos\_\_)
+* [\_\_pow\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_pow\_\_)
+* [\_\_radd\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_radd\_\_)
+* [\_\_rand\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rand\_\_)
+* [\_\_rfloordiv\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rfloordiv\_\_)
+* [\_\_rlshift\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rlshift\_\_)
+* [\_\_rmatmul\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rmatmul\_\_)
+* [\_\_rmod\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rmod\_\_)
+* [\_\_rmul\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rmul\_\_)
+* [\_\_ror\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_ror\_\_)
+* [\_\_round\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_round\_\_)
+* [\_\_rpow\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rpow\_\_)
+* [\_\_rrshift\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rrshift\_\_)
+* [\_\_rshift\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rshift\_\_)
+* [\_\_rsub\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rsub\_\_)
+* [\_\_rtruediv\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rtruediv\_\_)
+* [\_\_rxor\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_rxor\_\_)
+* [\_\_sub\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_sub\_\_)
+* [\_\_truediv\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_truediv\_\_)
+* [\_\_xor\_\_](https://docs.python.org/3/reference/datamodel.html#object.\_\_xor\_\_)
+
+### Supported NumPy functions.
+
+* [np.absolute](https://numpy.org/doc/stable/reference/generated/numpy.absolute.html)
+* [np.add](https://numpy.org/doc/stable/reference/generated/numpy.add.html)
+* [np.arccos](https://numpy.org/doc/stable/reference/generated/numpy.arccos.html)
+* [np.arccosh](https://numpy.org/doc/stable/reference/generated/numpy.arccosh.html)
+* [np.arcsin](https://numpy.org/doc/stable/reference/generated/numpy.arcsin.html)
+* [np.arcsinh](https://numpy.org/doc/stable/reference/generated/numpy.arcsinh.html)
+* [np.arctan](https://numpy.org/doc/stable/reference/generated/numpy.arctan.html)
+* [np.arctan2](https://numpy.org/doc/stable/reference/generated/numpy.arctan2.html)
+* [np.arctanh](https://numpy.org/doc/stable/reference/generated/numpy.arctanh.html)
+* [np.around](https://numpy.org/doc/stable/reference/generated/numpy.around.html)
+* [np.bitwise\_and](https://numpy.org/doc/stable/reference/generated/numpy.bitwise\_and.html)
+* [np.bitwise\_or](https://numpy.org/doc/stable/reference/generated/numpy.bitwise\_or.html)
+* [np.bitwise\_xor](https://numpy.org/doc/stable/reference/generated/numpy.bitwise\_xor.html)
+* [np.broadcast\_to](https://numpy.org/doc/stable/reference/generated/numpy.broadcast\_to.html)
+* [np.cbrt](https://numpy.org/doc/stable/reference/generated/numpy.cbrt.html)
+* [np.ceil](https://numpy.org/doc/stable/reference/generated/numpy.ceil.html)
+* [np.clip](https://numpy.org/doc/stable/reference/generated/numpy.clip.html)
+* [np.concatenate](https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html)
+* [np.copysign](https://numpy.org/doc/stable/reference/generated/numpy.copysign.html)
+* [np.cos](https://numpy.org/doc/stable/reference/generated/numpy.cos.html)
+* [np.cosh](https://numpy.org/doc/stable/reference/generated/numpy.cosh.html)
+* [np.deg2rad](https://numpy.org/doc/stable/reference/generated/numpy.deg2rad.html)
+* [np.degrees](https://numpy.org/doc/stable/reference/generated/numpy.degrees.html)
+* [np.dot](https://numpy.org/doc/stable/reference/generated/numpy.dot.html)
+* [np.equal](https://numpy.org/doc/stable/reference/generated/numpy.equal.html)
+* [np.exp](https://numpy.org/doc/stable/reference/generated/numpy.exp.html)
+* [np.exp2](https://numpy.org/doc/stable/reference/generated/numpy.exp2.html)
+* [np.expand\_dims](https://numpy.org/doc/stable/reference/generated/numpy.expand\_dims.html)
+* [np.expm1](https://numpy.org/doc/stable/reference/generated/numpy.expm1.html)
+* [np.fabs](https://numpy.org/doc/stable/reference/generated/numpy.fabs.html)
+* [np.float\_power](https://numpy.org/doc/stable/reference/generated/numpy.float\_power.html)
+* [np.floor](https://numpy.org/doc/stable/reference/generated/numpy.floor.html)
+* [np.floor\_divide](https://numpy.org/doc/stable/reference/generated/numpy.floor\_divide.html)
+* [np.fmax](https://numpy.org/doc/stable/reference/generated/numpy.fmax.html)
+* [np.fmin](https://numpy.org/doc/stable/reference/generated/numpy.fmin.html)
+* [np.fmod](https://numpy.org/doc/stable/reference/generated/numpy.fmod.html)
+* [np.gcd](https://numpy.org/doc/stable/reference/generated/numpy.gcd.html)
+* [np.greater](https://numpy.org/doc/stable/reference/generated/numpy.greater.html)
+* [np.greater\_equal](https://numpy.org/doc/stable/reference/generated/numpy.greater\_equal.html)
+* [np.heaviside](https://numpy.org/doc/stable/reference/generated/numpy.heaviside.html)
+* [np.hypot](https://numpy.org/doc/stable/reference/generated/numpy.hypot.html)
+* [np.invert](https://numpy.org/doc/stable/reference/generated/numpy.invert.html)
+* [np.isfinite](https://numpy.org/doc/stable/reference/generated/numpy.isfinite.html)
+* [np.isinf](https://numpy.org/doc/stable/reference/generated/numpy.isinf.html)
+* [np.isnan](https://numpy.org/doc/stable/reference/generated/numpy.isnan.html)
+* [np.lcm](https://numpy.org/doc/stable/reference/generated/numpy.lcm.html)
+* [np.ldexp](https://numpy.org/doc/stable/reference/generated/numpy.ldexp.html)
+* [np.left\_shift](https://numpy.org/doc/stable/reference/generated/numpy.left\_shift.html)
+* [np.less](https://numpy.org/doc/stable/reference/generated/numpy.less.html)
+* [np.less\_equal](https://numpy.org/doc/stable/reference/generated/numpy.less\_equal.html)
+* [np.log](https://numpy.org/doc/stable/reference/generated/numpy.log.html)
+* [np.log10](https://numpy.org/doc/stable/reference/generated/numpy.log10.html)
+* [np.log1p](https://numpy.org/doc/stable/reference/generated/numpy.log1p.html)
+* [np.log2](https://numpy.org/doc/stable/reference/generated/numpy.log2.html)
+* [np.logaddexp](https://numpy.org/doc/stable/reference/generated/numpy.logaddexp.html)
+* [np.logaddexp2](https://numpy.org/doc/stable/reference/generated/numpy.logaddexp2.html)
+* [np.logical\_and](https://numpy.org/doc/stable/reference/generated/numpy.logical\_and.html)
+* [np.logical\_not](https://numpy.org/doc/stable/reference/generated/numpy.logical\_not.html)
+* [np.logical\_or](https://numpy.org/doc/stable/reference/generated/numpy.logical\_or.html)
+* [np.logical\_xor](https://numpy.org/doc/stable/reference/generated/numpy.logical\_xor.html)
+* [np.matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html)
+* [np.maximum](https://numpy.org/doc/stable/reference/generated/numpy.maximum.html)
+* [np.minimum](https://numpy.org/doc/stable/reference/generated/numpy.minimum.html)
+* [np.multiply](https://numpy.org/doc/stable/reference/generated/numpy.multiply.html)
+* [np.negative](https://numpy.org/doc/stable/reference/generated/numpy.negative.html)
+* [np.nextafter](https://numpy.org/doc/stable/reference/generated/numpy.nextafter.html)
+* [np.not\_equal](https://numpy.org/doc/stable/reference/generated/numpy.not\_equal.html)
+* [np.ones\_like](https://numpy.org/doc/stable/reference/generated/numpy.ones\_like.html)
+* [np.positive](https://numpy.org/doc/stable/reference/generated/numpy.positive.html)
+* [np.power](https://numpy.org/doc/stable/reference/generated/numpy.power.html)
+* [np.rad2deg](https://numpy.org/doc/stable/reference/generated/numpy.rad2deg.html)
+* [np.radians](https://numpy.org/doc/stable/reference/generated/numpy.radians.html)
+* [np.reciprocal](https://numpy.org/doc/stable/reference/generated/numpy.reciprocal.html)
+* [np.remainder](https://numpy.org/doc/stable/reference/generated/numpy.remainder.html)
+* [np.reshape](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html)
+* [np.right\_shift](https://numpy.org/doc/stable/reference/generated/numpy.right\_shift.html)
+* [np.rint](https://numpy.org/doc/stable/reference/generated/numpy.rint.html)
+* [np.round\_](https://numpy.org/doc/stable/reference/generated/numpy.round\_.html)
+* [np.sign](https://numpy.org/doc/stable/reference/generated/numpy.sign.html)
+* [np.signbit](https://numpy.org/doc/stable/reference/generated/numpy.signbit.html)
+* [np.sin](https://numpy.org/doc/stable/reference/generated/numpy.sin.html)
+* [np.sinh](https://numpy.org/doc/stable/reference/generated/numpy.sinh.html)
+* [np.spacing](https://numpy.org/doc/stable/reference/generated/numpy.spacing.html)
+* [np.sqrt](https://numpy.org/doc/stable/reference/generated/numpy.sqrt.html)
+* [np.square](https://numpy.org/doc/stable/reference/generated/numpy.square.html)
+* [np.subtract](https://numpy.org/doc/stable/reference/generated/numpy.subtract.html)
+* [np.sum](https://numpy.org/doc/stable/reference/generated/numpy.sum.html)
+* [np.tan](https://numpy.org/doc/stable/reference/generated/numpy.tan.html)
+* [np.tanh](https://numpy.org/doc/stable/reference/generated/numpy.tanh.html)
+* [np.transpose](https://numpy.org/doc/stable/reference/generated/numpy.transpose.html)
+* [np.true\_divide](https://numpy.org/doc/stable/reference/generated/numpy.true\_divide.html)
+* [np.trunc](https://numpy.org/doc/stable/reference/generated/numpy.trunc.html)
+* [np.where](https://numpy.org/doc/stable/reference/generated/numpy.where.html)
+* [np.zeros\_like](https://numpy.org/doc/stable/reference/generated/numpy.zeros\_like.html)
+
+### Supported `ndarray` methods.
+
+* [np.ndarray.astype](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.astype.html)
+* [np.ndarray.clip](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.clip.html)
+* [np.ndarray.dot](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.dot.html)
+* [np.ndarray.flatten](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.flatten.html)
+* [np.ndarray.reshape](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.reshape.html)
+* [np.ndarray.transpose](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.transpose.html)
+
+### Supported `ndarray` properties.
+
+* [np.ndarray.shape](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.shape.html)
+* [np.ndarray.ndim](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.ndim.html)
+* [np.ndarray.size](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.size.html)
+* [np.ndarray.T](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.T.html)
+
+## Limitations
+
+### Control flow constraints.
+
+Some Python control flow statements are not supported. For example, you cannot have an `if` statement or a `while` statement for which the condition depends on an encrypted value. However, such statements are supported with constant values (e.g., `for i in range(SOME_CONSTANT)`, `if os.environ.get("SOME_FEATURE") == "ON":`).
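+
+For instance, a loop with a constant bound is fine because it can be unrolled while tracing; here is a plain-Python sketch of the pattern (the same shape of code works inside a function being compiled):
+
+```python
+SOME_CONSTANT = 3
+
+def f(x):
+    # allowed: the loop bound is a compile-time constant, so it can be unrolled
+    result = x
+    for _ in range(SOME_CONSTANT):
+        result = result + 1
+    return result
+
+# not allowed: a `while` loop whose condition depends on an encrypted value
+
+print(f(10))  # 13
+```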
+
+### Type constraints.
+
+Another constraint is that you cannot have floating-point inputs or floating-point outputs. You can have floating-point intermediate values as long as they can be converted to an integer Table Lookup (e.g., `(60 * np.sin(x)).astype(np.int64)`).
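+
+As a plain-**NumPy** sketch of this rule, a floating-point intermediate becomes acceptable once it is scaled and cast back to integers:
+
+```python
+import numpy as np
+
+x = np.array([0, 1, 2, 3])
+
+# floating-point intermediate values are fine...
+intermediate = 60 * np.sin(x)
+
+# ...as long as they are converted back to integers in the end
+result = intermediate.astype(np.int64)
+
+print(result.tolist())  # [0, 50, 54, 8]
+```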
+
+### Bit width constraints.
+
+There is a limit on the bit width of encrypted values. We are constantly working on increasing this bit width. If you go above the limit, you will get an error.
diff --git a/frontends/concrete-python/docs/getting-started/exactness.md b/frontends/concrete-python/docs/getting-started/exactness.md
new file mode 100644
index 000000000..18d6afe9e
--- /dev/null
+++ b/frontends/concrete-python/docs/getting-started/exactness.md
@@ -0,0 +1,27 @@
+# Exactness
+
+One of the most common operations in **Concrete-Numpy** is `Table Lookups` (TLUs). TLUs are performed with an FHE operation called `Programmable Bootstrapping` (PBS). PBSes have a certain probability of error which, when triggered, results in an inaccurate result.
+
+Let's say you have the table:
+
+```python
+[0, 1, 4, 9, 16, 25, 36, 49, 64]
+```
+
+Now, say you perform a table lookup using `4`. The result should be `16`, but because of the possibility of error, you could sometimes get `9` or `25`, or even `4` or `36` if the error probability is high.
+
+The probability of this error can be configured through the `p_error` and `global_p_error` configuration options. The difference between the two is that `p_error` applies to each individual TLU, while `global_p_error` applies to the whole circuit.
+
+Here is an example: if you set `p_error` to `0.01`, every TLU in the circuit will have a 1% chance of being inexact and a 99% chance of being exact. If there is a single TLU in the circuit, `global_p_error` would be 1% as well; with 2 TLUs, for example, `global_p_error` would be almost 2% (`1 - (0.99 * 0.99)`).
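+
+The formula generalizes: with an individual error probability `p` and `n` independent table lookups, the whole-circuit error probability is `1 - (1 - p) ** n`. A quick plain-Python check of the numbers used here:
+
+```python
+p_error = 0.01
+
+def global_error(n, p):
+    # probability that at least one of `n` independent TLUs is inexact
+    return 1 - (1 - p) ** n
+
+print(round(global_error(1, p_error), 4))  # 0.01
+print(round(global_error(2, p_error), 4))  # 0.0199
+```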
+
+However, if you set `global_p_error` to `0.01`, the whole circuit will have a 1% probability of being inexact, no matter how many table lookups there are.
+
+If you set both of them, both will be satisfied. Essentially, the stricter one will be used.
+
+By default, both `p_error` and `global_p_error` are set to `None`, which results in a `global_p_error` of `1 / 100_000` being used. Feel free to play with these configuration options to pick the one best suited for your needs! For example, in some machine learning use cases, off-by-one or off-by-two errors don't affect the result much; in such cases, `p_error` can be increased to improve performance without losing accuracy.
+
+See [How to Configure](../howto/configure.md) to learn how you can set a custom `p_error` and/or `global_p_error`.
+
+{% hint style="info" %}
+Configuring either of these variables affects computation time (compilation, key generation, circuit execution) and space requirements (size of the keys on disk and in memory). A lower error probability results in longer computation times and larger space requirements.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/getting-started/installing.md b/frontends/concrete-python/docs/getting-started/installing.md
new file mode 100644
index 000000000..85adbf5f3
--- /dev/null
+++ b/frontends/concrete-python/docs/getting-started/installing.md
@@ -0,0 +1,46 @@
+# Installation
+
+**Concrete-Numpy** is natively supported on Linux and macOS for Python 3.8 and 3.9. If your platform supports Docker, you can also use the **Concrete-Numpy** Docker image.
+
+## Using PyPI
+
+You can install **Concrete-Numpy** from PyPI:
+
+```shell
+pip install -U pip wheel setuptools
+pip install concrete-numpy
+```
+
+{% hint style="warning" %}
+Apple Silicon users must use the Docker installation (explained below), as there is no ARM version of some of our dependencies for the time being.
+{% endhint %}
+
+## Using Docker
+
+You can also get the **Concrete-Numpy** docker image:
+
+```shell
+docker pull zamafhe/concrete-numpy:v1.0.0
+```
+
+### Starting a Jupyter server.
+
+By default, the entry point of the **Concrete-Numpy** Docker image is a Jupyter server that you can access from your browser:
+
+```shell
+docker run --rm -it -p 8888:8888 zamafhe/concrete-numpy:v1.0.0
+```
+
+To save notebooks on the host, you can use a local volume:
+
+```shell
+docker run --rm -it -p 8888:8888 -v /path/to/notebooks:/data zamafhe/concrete-numpy:v1.0.0
+```
+
+### Starting a Bash session.
+
+Alternatively, you can launch a Bash session:
+
+```shell
+docker run --rm -it zamafhe/concrete-numpy:latest /bin/bash
+```
diff --git a/frontends/concrete-python/docs/getting-started/performance.md b/frontends/concrete-python/docs/getting-started/performance.md
new file mode 100644
index 000000000..51ebf1d65
--- /dev/null
+++ b/frontends/concrete-python/docs/getting-started/performance.md
@@ -0,0 +1,104 @@
+# Performance
+
+The most important operation in **Concrete-Numpy** is the table lookup (TLU). All operations except addition, subtraction, multiplication by non-encrypted values, and a few operations built from those primitives (e.g., matmul, conv) are converted to table lookups under the hood:
+
+```python
+import concrete.numpy as cnp
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return x ** 2
+
+inputset = range(2 ** 4)
+circuit = f.compile(inputset)
+```
+
+is exactly the same as
+
+```python
+import concrete.numpy as cnp
+
+table = cnp.LookupTable([x ** 2 for x in range(2 ** 4)])
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return table[x]
+
+inputset = range(2 ** 4)
+circuit = f.compile(inputset)
+```
+
+Table lookups are very flexible, and they allow **Concrete-Numpy** to support many operations, but they are expensive, so you should avoid them as much as possible. In most cases it's not possible to avoid them completely, but you might be able to reduce the number of TLUs or replace some of them with primitive operations.
+
+The exact cost depends on many variables (machine configuration, error probability, etc.), but you can develop some intuition about single-threaded CPU execution performance using:
+
+```python
+import time
+
+import concrete.numpy as cnp
+import numpy as np
+
+WARMUP = 3
+SAMPLES = 8
+BITWIDTHS = range(1, 15)
+CONFIGURATION = cnp.Configuration(
+ enable_unsafe_features=True,
+ use_insecure_key_cache=True,
+ insecure_key_cache_location=".keys",
+)
+
+timings = {}
+for n in BITWIDTHS:
+ @cnp.compiler({"x": "encrypted"})
+ def base(x):
+ return x
+
+ table = cnp.LookupTable([np.sqrt(x).round().astype(np.int64) for x in range(2 ** n)])
+
+ @cnp.compiler({"x": "encrypted"})
+ def tlu(x):
+ return table[x]
+
+ inputset = [0, 2**n - 1]
+
+ base_circuit = base.compile(inputset, CONFIGURATION)
+ tlu_circuit = tlu.compile(inputset, CONFIGURATION)
+
+ print()
+ print(f"Generating keys for n={n}...")
+
+ base_circuit.keygen()
+ tlu_circuit.keygen()
+
+ timings[n] = []
+ for i in range(SAMPLES + WARMUP):
+ sample = np.random.randint(0, 2 ** n)
+
+ encrypted_sample = base_circuit.encrypt(sample)
+ start = time.time()
+ encrypted_result = base_circuit.run(encrypted_sample)
+ end = time.time()
+ assert base_circuit.decrypt(encrypted_result) == sample
+
+ base_time = end - start
+
+ encrypted_sample = tlu_circuit.encrypt(sample)
+ start = time.time()
+ encrypted_result = tlu_circuit.run(encrypted_sample)
+ end = time.time()
+ assert tlu_circuit.decrypt(encrypted_result) == np.sqrt(sample).round().astype(np.int64)
+
+ tlu_time = end - start
+
+ if i >= WARMUP:
+ timings[n].append(tlu_time - base_time)
+ print(f"Sample #{i - WARMUP + 1} took {timings[n][-1] * 1000:.3f}ms")
+
+print()
+for n, times in timings.items():
+ print(f"{n}-bits -> {np.mean(times) * 1000:.3f}ms")
+```
+
+{% hint style="info" %}
+**Concrete-Numpy** automatically parallelizes execution if TLUs are applied to tensors.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/getting-started/quick_start.md b/frontends/concrete-python/docs/getting-started/quick_start.md
new file mode 100644
index 000000000..efc073a99
--- /dev/null
+++ b/frontends/concrete-python/docs/getting-started/quick_start.md
@@ -0,0 +1,90 @@
+# Quick Start
+
+To compute on encrypted data, you first need to define the function that you want to compute, then compile it into a Concrete-Numpy `Circuit`, which you can use to perform homomorphic evaluation.
+
+Here is the full example that we will walk through:
+
+```python
+import concrete.numpy as cnp
+
+def add(x, y):
+ return x + y
+
+compiler = cnp.Compiler(add, {"x": "encrypted", "y": "clear"})
+
+inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1)]
+circuit = compiler.compile(inputset)
+
+x = 4
+y = 4
+
+clear_evaluation = add(x, y)
+homomorphic_evaluation = circuit.encrypt_run_decrypt(x, y)
+
+print(x, "+", y, "=", clear_evaluation, "=", homomorphic_evaluation)
+```
+
+## Importing the library
+
+Everything you need to perform homomorphic evaluation is included in a single module:
+
+
+```python
+import concrete.numpy as cnp
+```
+
+## Defining the function to compile
+
+In this example, we will compile a simple addition function:
+
+
+```python
+def add(x, y):
+ return x + y
+```
+
+## Creating a compiler
+
+To compile the function, you need to create a `Compiler` by specifying the function to compile and the encryption status of its inputs:
+
+
+```python
+compiler = cnp.Compiler(add, {"x": "encrypted", "y": "clear"})
+```
+
+## Defining an inputset
+
+An inputset is a collection representing the typical inputs to the function. It is used to determine the bit widths and shapes of the variables within the function.
+
+It should be an iterable, yielding tuples of the same length as the number of arguments of the function being compiled:
+
+
+```python
+inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1)]
+```
+
+{% hint style="warning" %}
+All inputs in the inputset will be evaluated in the graph, which takes time. If you're experiencing long compilation times, consider providing a smaller inputset.
+{% endhint %}
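+
+Inputsets do not have to be written by hand; any iterable of such tuples works, so you can also build one programmatically (a sketch using random samples, not something the API requires):
+
+```python
+import random
+
+# each tuple has one entry per argument of the function being compiled
+inputset = [(random.randint(0, 7), random.randint(0, 7)) for _ in range(10)]
+
+assert all(len(sample) == 2 for sample in inputset)
+```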
+
+## Compiling the function
+
+You can use the `compile` method of a `Compiler` class with an inputset to perform the compilation and get the resulting circuit back:
+
+
+```python
+circuit = compiler.compile(inputset)
+```
+
+## Performing homomorphic evaluation
+
+You can use the `encrypt_run_decrypt` method of a `Circuit` class to perform homomorphic evaluation:
+
+
+```python
+homomorphic_evaluation = circuit.encrypt_run_decrypt(4, 4)
+```
+
+{% hint style="info" %}
+`circuit.encrypt_run_decrypt(*args)` is just a convenient way to do everything at once. It is implemented as `circuit.decrypt(circuit.run(circuit.encrypt(*args)))`.
+{% endhint %}
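+
+The three steps can also be performed one at a time, which is what you would do when encryption and execution happen on different machines (a sketch reusing the `add` circuit defined above):
+
+```python
+import concrete.numpy as cnp
+
+def add(x, y):
+    return x + y
+
+compiler = cnp.Compiler(add, {"x": "encrypted", "y": "clear"})
+circuit = compiler.compile([(2, 3), (0, 0), (1, 6), (7, 7), (7, 1)])
+
+# the three steps behind `encrypt_run_decrypt`, spelled out
+encrypted_args = circuit.encrypt(4, 4)
+encrypted_result = circuit.run(encrypted_args)
+result = circuit.decrypt(encrypted_result)
+
+assert result == 8
+```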
diff --git a/frontends/concrete-python/docs/howto/configure.md b/frontends/concrete-python/docs/howto/configure.md
new file mode 100644
index 000000000..286b5a4e2
--- /dev/null
+++ b/frontends/concrete-python/docs/howto/configure.md
@@ -0,0 +1,101 @@
+# Configure
+
+The behavior of **Concrete-Numpy** can be customized using `Configuration`s:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+configuration = cnp.Configuration(p_error=0.01, loop_parallelize=True)
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return x + 42
+
+inputset = range(10)
+circuit = f.compile(inputset, configuration=configuration)
+```
+
+Alternatively, you can override individual options by passing them as kwargs to the `compile` method:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return x + 42
+
+inputset = range(10)
+circuit = f.compile(inputset, p_error=0.01, loop_parallelize=True)
+```
+
+Or you can combine both:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+configuration = cnp.Configuration(p_error=0.01)
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return x + 42
+
+inputset = range(10)
+circuit = f.compile(inputset, configuration=configuration, loop_parallelize=True)
+```
+
+{% hint style="info" %}
+Additional kwargs to the `compile` method have higher precedence: if you set an option both in a `configuration` and in the `compile` call, the value passed to `compile` will be used.
+{% endhint %}
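+
+The precedence rule itself can be illustrated with plain dictionaries (just a sketch of the merge semantics, not the library internals):
+
+```python
+configuration_options = {"p_error": 0.001, "loop_parallelize": False}
+compile_kwargs = {"p_error": 0.01}
+
+# kwargs passed to `compile` win over the values in the Configuration
+effective = {**configuration_options, **compile_kwargs}
+
+assert effective["p_error"] == 0.01
+assert effective["loop_parallelize"] is False
+```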
+
+## Options
+
+* **show\_graph**: Optional[bool] = None
+  * Whether to print the computation graph during compilation.
+    `True` means always print, `False` means never print, and `None` means print depending on the verbose configuration below.
+
+* **show\_mlir**: Optional[bool] = None
+ * Whether to print MLIR during compilation.
+    `True` means always print, `False` means never print, and `None` means print depending on the verbose configuration below.
+
+* **show\_optimizer**: Optional[bool] = None
+ * Whether to print optimizer output during compilation.
+    `True` means always print, `False` means never print, and `None` means print depending on the verbose configuration below.
+
+* **verbose**: bool = False
+ * Whether to print details related to compilation.
+
+* **dump\_artifacts\_on\_unexpected\_failures**: bool = True
+ * Whether to export debugging artifacts automatically on compilation failures.
+
+* **auto_adjust_rounders**: bool = False
+ * Whether to adjust rounders automatically.
+
+* **p_error**: Optional[float] = None
+  * Error probability for individual table lookups. If set, each table lookup will have a probability of an inexact result smaller than this value. See [Exactness](../getting-started/exactness.md) to learn more.
+
+* **global_p_error**: Optional[float] = None
+  * Global error probability for the whole circuit. If set, the whole circuit will have a probability of an inexact result smaller than this value. See [Exactness](../getting-started/exactness.md) to learn more.
+
+* **jit**: bool = False
+ * Whether to use JIT compilation.
+
+* **loop\_parallelize**: bool = True
+ * Whether to enable loop parallelization in the compiler.
+
+* **dataflow\_parallelize**: bool = False
+ * Whether to enable dataflow parallelization in the compiler.
+
+* **auto\_parallelize**: bool = False
+ * Whether to enable auto parallelization in the compiler.
+
+* **enable\_unsafe\_features**: bool = False
+ * Whether to enable unsafe features.
+
+* **use\_insecure\_key\_cache**: bool = False _(Unsafe)_
+ * Whether to use the insecure key cache.
+
+* **insecure\_key\_cache\_location**: Optional\[Union\[Path, str]] = None
+ * Location of insecure key cache.
diff --git a/frontends/concrete-python/docs/howto/debug.md b/frontends/concrete-python/docs/howto/debug.md
new file mode 100644
index 000000000..7b0e866e1
--- /dev/null
+++ b/frontends/concrete-python/docs/howto/debug.md
@@ -0,0 +1,265 @@
+# Debug
+
+In this section, you will learn how to debug the compilation process, as well as how to get help if you cannot resolve your issue.
+
+## Debug Artifacts
+
+**Concrete-Numpy** has an artifact system to simplify the process of debugging issues.
+
+### Automatic export.
+
+In case of compilation failures, artifacts are exported automatically to the `.artifacts` directory under the working directory. Let's intentionally create a compilation failure to show what kinds of things are exported.
+
+```python
+def f(x):
+ return np.sin(x)
+```
+
+This function fails to compile because **Concrete-Numpy** does not support floating-point outputs. When you try to compile it, an exception is raised and the artifacts are exported automatically. If you go to the `.artifacts` directory under the working directory, you'll see the following files:
+
+#### environment.txt
+
+This file contains information about your setup (i.e., your operating system and python version).
+
+```
+Linux-5.12.13-arch1-2-x86_64-with-glibc2.29 #1 SMP PREEMPT Fri, 25 Jun 2021 22:56:51 +0000
+Python 3.8.10
+```
+
+#### requirements.txt
+
+This file contains information about python packages and their versions installed on your system.
+
+```
+alabaster==0.7.12
+appdirs==1.4.4
+argon2-cffi==21.1.0
+...
+wheel==0.37.0
+widgetsnbextension==3.5.1
+wrapt==1.12.1
+```
+
+#### function.txt
+
+This file contains information about the function you tried to compile.
+
+```
+def f(x):
+ return np.sin(x)
+```
+
+#### parameters.txt
+
+This file contains information about the encryption status of the parameters of the function you tried to compile.
+
+```
+x :: encrypted
+```
+
+#### 1.initial.graph.txt
+
+This file contains the textual representation of the initial computation graph right after tracing.
+
+```
+%0 = x # EncryptedScalar
+%1 = sin(%0) # EncryptedScalar
+return %1
+```
+
+#### 1.initial.graph.png
+
+This file contains the visual representation of the initial computation graph right after tracing.
+
+
+
+#### 2.final.graph.txt
+
+This file contains the textual representation of the final computation graph right before MLIR conversion.
+
+```
+%0 = x # EncryptedScalar
+%1 = sin(%0) # EncryptedScalar
+return %1
+```
+
+#### 2.final.graph.png
+
+This file contains the visual representation of the final computation graph right before MLIR conversion.
+
+
+
+#### traceback.txt
+
+This file contains information about the error you received.
+
+```
+Traceback (most recent call last):
+ File "/home/default/Documents/Projects/Zama/hdk/concrete/numpy/compilation/compiler.py", line 320, in compile
+ mlir = GraphConverter.convert(self.graph)
+ File "/home/default/Documents/Projects/Zama/hdk/concrete/numpy/mlir/graph_converter.py", line 298, in convert
+ GraphConverter._check_graph_convertibility(graph)
+ File "/home/default/Documents/Projects/Zama/hdk/concrete/numpy/mlir/graph_converter.py", line 175, in _check_graph_convertibility
+ raise RuntimeError(message)
+RuntimeError: Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar
+%1 = sin(%0) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+return %1
+```
+
+### Manual export.
+
+Manual exports are mostly used for visualization. Nonetheless, they can be very useful for demonstrations. Here is how to perform one:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+artifacts = cnp.DebugArtifacts("/tmp/custom/export/path")
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return 127 - (50 * (np.sin(x) + 1)).astype(np.int64)
+
+inputset = range(2 ** 3)
+circuit = f.compile(inputset, artifacts=artifacts)
+
+artifacts.export()
+```
+
+If you go to the `/tmp/custom/export/path` directory, you'll see the following files:
+
+#### 1.initial.graph.txt
+
+This file contains the textual representation of the initial computation graph right after tracing.
+
+```
+%0 = 127 # ClearScalar
+%1 = 50 # ClearScalar
+%2 = 1 # ClearScalar
+%3 = x # EncryptedScalar
+%4 = sin(%3) # EncryptedScalar
+%5 = add(%4, %2) # EncryptedScalar
+%6 = multiply(%1, %5) # EncryptedScalar
+%7 = astype(%6, dtype=int_) # EncryptedScalar
+%8 = subtract(%0, %7) # EncryptedScalar
+return %8
+```
+
+#### 1.initial.graph.png
+
+This file contains the visual representation of the initial computation graph right after tracing.
+
+
+
+#### 2.after-float-fuse-0.graph.txt
+
+This file contains the textual representation of the intermediate computation graph after fusing.
+
+```
+%0 = 127 # ClearScalar
+%1 = x # EncryptedScalar
+%2 = subgraph(%1) # EncryptedScalar
+%3 = subtract(%0, %2) # EncryptedScalar
+return %3
+
+Subgraphs:
+
+ %2 = subgraph(%1):
+
+ %0 = 50 # ClearScalar
+ %1 = 1 # ClearScalar
+ %2 = input # EncryptedScalar
+ %3 = sin(%2) # EncryptedScalar
+ %4 = add(%3, %1) # EncryptedScalar
+ %5 = multiply(%0, %4) # EncryptedScalar
+ %6 = astype(%5, dtype=int_) # EncryptedScalar
+ return %6
+```
+
+#### 2.after-fusing.graph.png
+
+This file contains the visual representation of the intermediate computation graph after fusing.
+
+
+
+#### 3.final.graph.txt
+
+This file contains the textual representation of the final computation graph right before MLIR conversion.
+
+```
+%0 = 127 # ClearScalar
+%1 = x # EncryptedScalar
+%2 = subgraph(%1) # EncryptedScalar
+%3 = subtract(%0, %2) # EncryptedScalar
+return %3
+
+Subgraphs:
+
+ %2 = subgraph(%1):
+
+ %0 = 50 # ClearScalar
+ %1 = 1 # ClearScalar
+ %2 = input # EncryptedScalar
+ %3 = sin(%2) # EncryptedScalar
+ %4 = add(%3, %1) # EncryptedScalar
+ %5 = multiply(%0, %4) # EncryptedScalar
+ %6 = astype(%5, dtype=int_) # EncryptedScalar
+ return %6
+```
+
+#### 3.final.graph.png
+
+This file contains the visual representation of the final computation graph right before MLIR conversion.
+
+
+
+#### bounds.txt
+
+This file contains information about the bounds of the final computation graph of the function you are compiling using the inputset you provide.
+
+```
+%0 :: [127, 127]
+%1 :: [0, 7]
+%2 :: [2, 95]
+%3 :: [32, 125]
+```
+
+#### mlir.txt
+
+This file contains information about the MLIR of the function you compiled using the inputset you provided.
+
+```
+module {
+ func @main(%arg0: !FHE.eint<7>) -> !FHE.eint<7> {
+ %c127_i8 = arith.constant 127 : i8
+ %cst = arith.constant dense<"..."> : tensor<128xi64>
+ %0 = "FHE.apply_lookup_table"(%arg0, %cst) : (!FHE.eint<7>, tensor<128xi64>) -> !FHE.eint<7>
+ %1 = "FHE.sub_int_eint"(%c127_i8, %0) : (i8, !FHE.eint<7>) -> !FHE.eint<7>
+ return %1 : !FHE.eint<7>
+ }
+}
+```
+
+## Asking the community
+
+You can seek help with your issue by asking a question directly in the [community forum](https://community.zama.ai/).
+
+## Submitting an issue
+
+If you cannot find a solution in the community forum, or if you have found a bug in the library, you can create an issue in our GitHub repository.
+
+In case of a bug:
+
+* try to minimize randomness
+* try to minimize your function as much as possible while keeping the bug - this will help to fix the bug faster
+* try to include your inputset in the issue
+* try to include reproduction steps in the issue
+* try to include debug artifacts in the issue
+
+In case of a feature request:
+
+* try to give a minimal example of the desired behavior
+* try to explain your use case
diff --git a/frontends/concrete-python/docs/howto/deploy.md b/frontends/concrete-python/docs/howto/deploy.md
new file mode 100644
index 000000000..1643cc994
--- /dev/null
+++ b/frontends/concrete-python/docs/howto/deploy.md
@@ -0,0 +1,132 @@
+# Deploy
+
+After developing your circuit, you may want to deploy it. However, sharing the details of your circuit with every client might not be desirable. Furthermore, you might want to perform the computation on dedicated servers. In this case, you can use the `Client` and `Server` features of **Concrete-Numpy**.
+
+## Development of the circuit
+
+You can develop your circuit like we've discussed in the previous chapters. Here is a simple example:
+
+
+```python
+import concrete.numpy as cnp
+
+@cnp.compiler({"x": "encrypted"})
+def function(x):
+ return x + 42
+
+inputset = range(10)
+circuit = function.compile(inputset)
+```
+
+Once you have your circuit, you can save everything the server needs like so:
+
+
+```python
+circuit.server.save("server.zip")
+```
+
+All you need to do now is to send `server.zip` to your computation server.
+
+## Setting up a server
+
+You can load the `server.zip` you get from the development machine as follows:
+
+
+```python
+import concrete.numpy as cnp
+
+server = cnp.Server.load("server.zip")
+```
+
+At this point, you will need to wait for requests from clients. The first likely request is for `ClientSpecs`.
+
+Clients need `ClientSpecs` to generate keys and request computation. You can serialize `ClientSpecs` like so:
+
+
+```python
+serialized_client_specs: str = server.client_specs.serialize()
+```
+
+Then, you can send it to the clients requesting it.
+
+## Setting up clients
+
+After getting the serialized `ClientSpecs` from a server, you can create the client object like this:
+
+
+```python
+client_specs = cnp.ClientSpecs.unserialize(serialized_client_specs)
+client = cnp.Client(client_specs)
+```
+
+## Generating keys (on the client)
+
+Once you have the `Client` object, you can perform key generation:
+
+
+```python
+client.keygen()
+```
+
+This method generates encryption/decryption keys and evaluation keys.
+
+The server requires evaluation keys linked to the encryption keys that you just generated. You can serialize your evaluation keys as shown below:
+
+
+```python
+serialized_evaluation_keys: bytes = client.evaluation_keys.serialize()
+```
+
+After serialization, you can send the evaluation keys to the server.
+
+{% hint style="info" %}
+Serialized evaluation keys are very big in size, so you may want to cache them on the server instead of sending them with each request.
+{% endhint %}
+
+## Encrypting inputs (on the client)
+
+You are now ready to encrypt your inputs and request the server to perform the computation. You can do it like so:
+
+
+```python
+serialized_args: bytes = client.encrypt(7).serialize()
+```
+
+The only thing left to do is to send serialized args to the server.
+
+## Performing computation (on the server)
+
+Once you have the serialized evaluation keys and serialized arguments, you can unserialize them like so:
+
+
+```python
+unserialized_evaluation_keys = cnp.EvaluationKeys.unserialize(serialized_evaluation_keys)
+unserialized_args = server.client_specs.unserialize_public_args(serialized_args)
+```
+
+Then, you can perform the computation:
+
+
+```python
+public_result = server.run(unserialized_args, unserialized_evaluation_keys)
+serialized_public_result: bytes = public_result.serialize()
+```
+
+Finally, you can send the serialized public result back to the client, so they can decrypt it and get the result of the computation.
+
+## Decrypting the result (on the client)
+
+Once you have received the public result of the computation from the server, you can unserialize it:
+
+
+```python
+unserialized_public_result = client.specs.unserialize_public_result(serialized_public_result)
+```
+
+Finally, you can decrypt the result like so:
+
+
+```python
+result = client.decrypt(unserialized_public_result)
+assert result == 49
+```
diff --git a/frontends/concrete-python/docs/linux.dependency.licenses.txt b/frontends/concrete-python/docs/linux.dependency.licenses.txt
new file mode 100644
index 000000000..c07c3aa7a
--- /dev/null
+++ b/frontends/concrete-python/docs/linux.dependency.licenses.txt
@@ -0,0 +1,21 @@
+ Name Version License
+ Pillow 9.4.0 Historical Permission Notice and Disclaimer (HPND)
+ PyYAML 6.0 MIT License
+ concrete-compiler 0.24.0rc5 BSD-3
+ cycler 0.11.0 BSD License
+ fonttools 4.38.0 MIT License
+ kiwisolver 1.4.4 BSD License
+ matplotlib 3.5.3 Python Software Foundation License
+ networkx 2.6.3 BSD License
+ numpy 1.24.2 BSD License
+ nvidia-cublas-cu11 11.10.3.66 Other/Proprietary License
+ nvidia-cuda-nvrtc-cu11 11.7.99 Other/Proprietary License
+ nvidia-cuda-runtime-cu11 11.7.99 Other/Proprietary License
+ nvidia-cudnn-cu11 8.5.0.96 Other/Proprietary License
+ packaging 23.0 Apache Software License; BSD License
+ pyparsing 3.0.9 MIT License
+ python-dateutil 2.8.2 Apache Software License; BSD License
+ scipy 1.10.1 BSD License
+ six 1.16.0 MIT License
+ torch 1.13.1 BSD License
+ typing-extensions 3.10.0.2 Python Software Foundation License
diff --git a/frontends/concrete-python/docs/tutorial/decorator.md b/frontends/concrete-python/docs/tutorial/decorator.md
new file mode 100644
index 000000000..c46ca92ab
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/decorator.md
@@ -0,0 +1,20 @@
+# Decorator
+
+If you are trying to compile a regular function, you can use the decorator interface instead of the explicit `Compiler` interface to simplify your code:
+
+```python
+import concrete.numpy as cnp
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return x + 42
+
+inputset = range(10)
+circuit = f.compile(inputset)
+
+assert circuit.encrypt_run_decrypt(10) == f(10)
+```
+
+{% hint style="info" %}
+Think of this decorator as a way to add the `compile` method to the function object without changing its name elsewhere.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/tutorial/direct_circuits.md b/frontends/concrete-python/docs/tutorial/direct_circuits.md
new file mode 100644
index 000000000..53e0b7ef0
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/direct_circuits.md
@@ -0,0 +1,83 @@
+# Direct Circuits
+
+{% hint style="warning" %}
+Direct circuits are still experimental. It is very easy to shoot yourself in the foot (e.g., no overflow checks, no type coercion) while using them, so use them with care.
+{% endhint %}
+
+For some applications, the data types of inputs, intermediate values, and outputs are known (e.g., for manipulating bytes, you would want to use uint8). In such cases, using inputsets to determine bounds is not necessary, and can even be error-prone. Therefore, another interface for defining such circuits is introduced:
+
+```python
+import concrete.numpy as cnp
+
+@cnp.circuit({"x": "encrypted"})
+def circuit(x: cnp.uint8):
+ return x + 42
+
+assert circuit.encrypt_run_decrypt(10) == 52
+```
+
+There are a few differences between direct circuits and traditional circuits though:
+
+- You need to remember that the resulting dtype of each operation is determined by its inputs. This can lead to unexpected results if you're not careful (e.g., if you compute `-x` where `x: cnp.uint8`, you won't get a negative value, as the result will be `cnp.uint8` as well).
+- You need to use cnp types in `.astype(...)` calls (e.g., `np.sqrt(x).astype(cnp.uint4)`). This is because there is no inputset evaluation, so the bit width of the output cannot be determined otherwise.
+- You need to specify the resulting data type in the [univariate](./extensions.md#cnpunivariatefunction) extension (e.g., `cnp.univariate(function, outputs=cnp.uint4)(x)`), for the same reason.
+- You need to be careful with overflows. With inputset evaluation, you get bigger bit widths but no overflows; with direct definition, you are responsible for ensuring that there are no overflows!
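+
+The `-x` pitfall from the first point can be previewed with plain **NumPy** unsigned types, which wrap around in the same way (a sketch that is independent of the library):
+
+```python
+import numpy as np
+
+x = np.uint8(3)
+
+# negating an unsigned 8-bit value wraps around instead of going negative
+assert int(-x) == 253  # i.e., 256 - 3
+
+# likewise, unsigned subtraction underflows when the result would be negative
+assert int(np.uint8(3) - np.uint8(5)) == 254
+```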
+
+Let's go over a more complicated example to see how direct circuits behave:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+def square(value):
+ return value ** 2
+
+@cnp.circuit({"x": "encrypted", "y": "encrypted"})
+def circuit(x: cnp.uint8, y: cnp.int2):
+ a = x + 10
+ b = y + 10
+
+ c = np.sqrt(a).round().astype(cnp.uint4)
+ d = cnp.univariate(square, outputs=cnp.uint8)(b)
+
+ return d - c
+
+print(circuit)
+```
+prints
+```
+%0 = x # EncryptedScalar
+%1 = y # EncryptedScalar
+%2 = 10 # ClearScalar
+%3 = add(%0, %2) # EncryptedScalar
+%4 = 10 # ClearScalar
+%5 = add(%1, %4) # EncryptedScalar
+%6 = subgraph(%3) # EncryptedScalar
+%7 = square(%5) # EncryptedScalar
+%8 = subtract(%7, %6) # EncryptedScalar
+return %8
+
+Subgraphs:
+
+ %6 = subgraph(%3):
+
+ %0 = input # EncryptedScalar
+ %1 = sqrt(%0) # EncryptedScalar
+ %2 = around(%1, decimals=0) # EncryptedScalar
+ %3 = astype(%2) # EncryptedScalar
+ return %3
+```
+And here is the breakdown of assigned data types:
+```
+%0 is uint8 because it's specified in the definition
+%1 is int2 because it's specified in the definition
+%2 is uint4 because it's the constant 10
+%3 is uint8 because it's the addition between uint8 and uint4
+%4 is uint4 because it's the constant 10
+%5 is int4 because it's the addition between int2 and uint4
+%6 is uint4 because it's specified in astype
+%7 is uint8 because it's specified in univariate
+%8 is uint8 because it's the subtraction between uint8 and uint4
+```
+
+As you can see, `%8` is the subtraction of two unsigned values, and its result is unsigned as well. If `c > d`, the subtraction overflows, which results in undefined behavior.
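The kind of wrap-around that can occur when `c > d` can be sketched with plain NumPy unsigned arithmetic (ordinary NumPy here, not an encrypted circuit):

```python
import numpy as np

# uint8 subtraction wraps modulo 2**8 when the true result is negative
d = np.array(100, dtype=np.uint8)
c = np.array(121, dtype=np.uint8)

assert d - c == 235  # (100 - 121) % 256, not -21
```
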
diff --git a/frontends/concrete-python/docs/tutorial/extensions.md b/frontends/concrete-python/docs/tutorial/extensions.md
new file mode 100644
index 000000000..3b5b40fe0
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/extensions.md
@@ -0,0 +1,153 @@
+# Extensions
+
+**Concrete-Numpy** tries to support **NumPy** as much as possible, but due to some technical limitations, not everything can be supported. On top of that, there are some useful things that **NumPy** lacks. In some of these situations, we provide extensions in **Concrete-Numpy** to improve your experience.
+
+## cnp.zero()
+
+Allows you to create an encrypted scalar zero:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ z = cnp.zero()
+ return x + z
+
+inputset = range(10)
+circuit = f.compile(inputset)
+
+for x in range(10):
+ assert circuit.encrypt_run_decrypt(x) == x
+```
+
+## cnp.zeros(shape)
+
+Allows you to create an encrypted tensor of zeros:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ z = cnp.zeros((2, 3))
+ return x + z
+
+inputset = range(10)
+circuit = f.compile(inputset)
+
+for x in range(10):
+ assert np.array_equal(circuit.encrypt_run_decrypt(x), np.array([[x, x, x], [x, x, x]]))
+```
+
+## cnp.one()
+
+Allows you to create an encrypted scalar one:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ z = cnp.one()
+ return x + z
+
+inputset = range(10)
+circuit = f.compile(inputset)
+
+for x in range(10):
+ assert circuit.encrypt_run_decrypt(x) == x + 1
+```
+
+## cnp.ones(shape)
+
+Allows you to create an encrypted tensor of ones:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ z = cnp.ones((2, 3))
+ return x + z
+
+inputset = range(10)
+circuit = f.compile(inputset)
+
+for x in range(10):
+ assert np.array_equal(circuit.encrypt_run_decrypt(x), np.array([[x, x, x], [x, x, x]]) + 1)
+```
+
+## cnp.univariate(function)
+
+Allows you to wrap any univariate function into a single table lookup:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+def complex_univariate_function(x):
+
+ def per_element(element):
+ result = 0
+ for i in range(element):
+ result += i
+ return result
+
+ return np.vectorize(per_element)(x)
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return cnp.univariate(complex_univariate_function)(x)
+
+inputset = [np.random.randint(0, 5, size=(3, 2)) for _ in range(10)]
+circuit = f.compile(inputset)
+
+sample = np.array([
+ [0, 4],
+ [2, 1],
+ [3, 0],
+])
+assert np.array_equal(circuit.encrypt_run_decrypt(sample), complex_univariate_function(sample))
+```
+
+{% hint style="danger" %}
+The wrapped function shouldn't have any side effects, and it should be deterministic.
+{% endhint %}
+
+## connx.conv(...)
+
+Allows you to perform a convolution operation, with the same semantics as [onnx.Conv](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv):
+
+```python
+import concrete.numpy as cnp
+import concrete.onnx as connx
+import numpy as np
+
+weight = np.array([[2, 1], [3, 2]]).reshape(1, 1, 2, 2)
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return connx.conv(x, weight, strides=(2, 2), dilations=(1, 1), group=1)
+
+inputset = [np.random.randint(0, 4, size=(1, 1, 4, 4)) for _ in range(10)]
+circuit = f.compile(inputset)
+
+sample = np.array(
+ [
+ [3, 2, 1, 0],
+ [3, 2, 1, 0],
+ [3, 2, 1, 0],
+ [3, 2, 1, 0],
+ ]
+).reshape(1, 1, 4, 4)
+assert np.array_equal(circuit.encrypt_run_decrypt(sample), f(sample))
+```
+
+{% hint style="danger" %}
+Only 2D convolutions with a single group and without padding are supported for the time being.
+{% endhint %}
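
For intuition, the convolution in the example above can be reproduced with a few lines of plain NumPy (a sketch of the same sliding-window computation, not part of the connx API):

```python
import numpy as np

weight = np.array([[2, 1], [3, 2]])
sample = np.array([
    [3, 2, 1, 0],
    [3, 2, 1, 0],
    [3, 2, 1, 0],
    [3, 2, 1, 0],
])

# 2x2 kernel, strides of (2, 2), no padding -> 2x2 output
output = np.zeros((2, 2), dtype=np.int64)
for i in range(2):
    for j in range(2):
        patch = sample[2 * i : 2 * i + 2, 2 * j : 2 * j + 2]
        output[i, j] = np.sum(patch * weight)

assert np.array_equal(output, np.array([[21, 5], [21, 5]]))
```
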
diff --git a/frontends/concrete-python/docs/tutorial/floating_points.md b/frontends/concrete-python/docs/tutorial/floating_points.md
new file mode 100644
index 000000000..0ce7281fc
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/floating_points.md
@@ -0,0 +1,79 @@
+# Floating Points
+
+**Concrete-Numpy** partly supports floating points:
+
+* They cannot be inputs
+* They cannot be outputs
+* They can be intermediate values under certain constraints
+
+## As intermediate values
+
+**Concrete-Compiler**, which is used for compiling the circuit, doesn't support floating points at all. However, it does support table lookups, which take an integer and map it to another integer; how the lookup table is calculated doesn't matter to the compiler. The constraint is that the operation must take a single integer input and produce a single integer output.
+
+As long as your floating point operations comply with those constraints, **Concrete-Numpy** automatically converts your operations to a table lookup operation:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ a = x + 1.5
+ b = np.sin(x)
+ c = np.around(a + b)
+ d = c.astype(np.int64)
+ return d
+
+inputset = range(8)
+circuit = f.compile(inputset)
+
+for x in range(8):
+ assert circuit.encrypt_run_decrypt(x) == f(x)
+```
+
+In the example above, `a`, `b`, and `c` are all floating point intermediates. However, they are only used to calculate `d`, which is an integer, and the value of `d` depends solely on `x`, which is another integer. **Concrete-Numpy** detects this and fuses all of those operations into a single table lookup from `x` to `d`.
+
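Conceptually, the fused subgraph is just a precomputed lookup table from `x` to `d`. Here is a plain-Python sketch of the idea for the example above (this only mirrors the compiler's behavior, it is not the actual API):

```python
import numpy as np

# Precompute d = around((x + 1.5) + sin(x)) for every possible input value
table = [int(np.around((x + 1.5) + np.sin(x))) for x in range(8)]

# Evaluating the fused subgraph is then a single table lookup
assert table[0] == 2  # around(1.5 + sin(0)) == around(1.5) == 2
assert table[1] == 3  # around(2.5 + sin(1)) == around(3.341...) == 3
```
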
+This approach works for a variety of use cases, but it comes up short for some:
+
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted", "y": "encrypted"})
+def f(x, y):
+ a = x + 1.5
+ b = np.sin(y)
+ c = np.around(a + b)
+ d = c.astype(np.int64)
+ return d
+
+inputset = [(1, 2), (3, 0), (2, 2), (1, 3)]
+circuit = f.compile(inputset)
+
+```
+
+... results in:
+
+```
+RuntimeError: Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar
+%1 = 1.5 # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer constants are supported
+%2 = y # EncryptedScalar
+%3 = add(%0, %1) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+%4 = sin(%2) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+%5 = add(%3, %4) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+%6 = around(%5) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+%7 = astype(%6, dtype=int_) # EncryptedScalar
+return %7
+```
+
+The reason for this is that `d` no longer depends solely on `x`; it depends on `y` as well. **Concrete-Numpy** cannot fuse these operations, so it raises an exception instead.
diff --git a/frontends/concrete-python/docs/tutorial/formatting.md b/frontends/concrete-python/docs/tutorial/formatting.md
new file mode 100644
index 000000000..9b1e4d56f
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/formatting.md
@@ -0,0 +1,19 @@
+# Formatting
+
+You can convert your compiled circuit into its textual representation by converting it to a string:
+
+
+```python
+str(circuit)
+```
+
+If you just want to see the output on your terminal, you can directly print it as well:
+
+
+```python
+print(circuit)
+```
+
+{% hint style="warning" %}
+Formatting is just for debugging. It's not possible to serialize the circuit back from its textual representation. See [How to Deploy](../howto/deploy.md) if that's your goal.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/tutorial/key_value_database.ipynb b/frontends/concrete-python/docs/tutorial/key_value_database.ipynb
new file mode 100644
index 000000000..d8e734396
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/key_value_database.ipynb
@@ -0,0 +1,1083 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Key Value Database\n",
+ "\n",
+ "This is an interactive tutorial of an Encrypted Key Value Database. The database allows for three operations, **Insert, Replace, and Query**. All the operations are implemented as fully-homomorphic encrypted circuits.\n",
+ "\n",
+ "In `examples/key-value-database/`, you will find the following files:\n",
+ "\n",
+ "- `static-size.py`: This file contains a static size database implementation, meaning that the number of entries is given as a parameter at the beginning.\n",
+ "- `dynamic-size.py`: This file contains a dynamic size database implementation, meaning that the database starts as a zero entry database, and is grown as needed.\n",
+ "\n",
+ "This tutorial goes over the statically-sized database implementation. The dynamic database implementation is very similar, and the reader is encouraged to look at the code to see the differences.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below are the import statements.\n",
+ "\n",
+ "**time:** Used for measuring the time to create keys, encrypt and run circuits.\n",
+ "\n",
+ "**concrete.numpy:** Used for implementing homomorphic circuits.\n",
+ "\n",
+ "**numpy:** Used for mathematical operations. Concrete library compiles numpy operations into FHE encrypted operations.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import time\n",
+ "\n",
+ "import concrete.numpy as cnp\n",
+ "import numpy as np"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below are the database configuration parameters. \n",
+ "\n",
+ "**Number of Entries:** Defines the maximum number of insertable (key, value) pairs. \n",
+ "\n",
+ "**Chunk Size:** Defines the size of each chunk. Chunks are used as the smallest substructure of key and values.\n",
+ "\n",
+ "**Key Size:** Defines the size of each key.\n",
+ "\n",
+ "**Value Size:** Defines the size of each value."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# The number of entries in the database\n",
+ "NUMBER_OF_ENTRIES = 5\n",
+ "# The number of bits in each chunk\n",
+ "CHUNK_SIZE = 4\n",
+ "\n",
+ "# The number of bits in the key and value\n",
+ "KEY_SIZE = 32\n",
+ "VALUE_SIZE = 32"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below are the definition of the state, and the accessors/indexers to the state.\n",
+ "\n",
+ "The shape of the state is defined with respect to the size of the key/value with the table given below.\n",
+ "\n",
+ "| Flag Size | Key Size | Number of Key Chunks | Value Size | Number of Value Chunks |\n",
+ "| --- | --- | --- | --- | --- |\n",
+ "| 1 | 32 | 32/4 = 8 | 32 | 32/4 = 8 |\n",
+ "| 1 | 8 | 8/4 = 2 | 16 | 16/4 = 4 |\n",
+ "| 1 | 4 | 4/4 = 1 | 4 | 4/4 = 1 |"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Key and Value size must be a multiple of chunk size\n",
+ "assert KEY_SIZE % CHUNK_SIZE == 0\n",
+ "assert VALUE_SIZE % CHUNK_SIZE == 0\n",
+ "\n",
+ "# Required number of chunks to store keys and values\n",
+ "NUMBER_OF_KEY_CHUNKS = KEY_SIZE // CHUNK_SIZE\n",
+ "NUMBER_OF_VALUE_CHUNKS = VALUE_SIZE // CHUNK_SIZE\n",
+ "\n",
+ "# The shape of the state as a tensor\n",
+ "# Shape:\n",
+ "# | Flag Size | Key Size | Value Size |\n",
+ "# | 1 | 32/4 = 8 | 32/4 = 8 |\n",
+ "STATE_SHAPE = (NUMBER_OF_ENTRIES, 1 + NUMBER_OF_KEY_CHUNKS + NUMBER_OF_VALUE_CHUNKS)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Slices below are used to index certain parts of the the state. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Indexers for each part of the state\n",
+ "FLAG = 0\n",
+ "KEY = slice(1, 1 + NUMBER_OF_KEY_CHUNKS)\n",
+ "VALUE = slice(1 + NUMBER_OF_KEY_CHUNKS, None)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Encode/Decode functions.\n",
+ "\n",
+ "Encode/Decode functions are used to convert between integers and numpy arrays. The interface exposes integers, but the state is stored and processed as a numpy array."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Encode\n",
+ "\n",
+ "Encodes a number into a numpy array.\n",
+ "\n",
+ "- The number is encoded in binary and then split into chunks.\n",
+ "- Each chunk is then converted to an integer\n",
+ "- The integers are then stored in a numpy array\n",
+ "\n",
+ "| Function Call | Input(Integer) | Array-Width | Result(Numpy Array) |\n",
+ "| --- | --- | --- | --- |\n",
+ "| encode(25, 4) | 25 | 4 | [0, 0, 1, 9] |\n",
+ "| encode(40, 4) | 40 | 4 | [0, 0, 2, 8] |\n",
+ "| encode(11, 3) | 11 | 3 | [0, 0, 11] |\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def encode(number: int, width: int) -> np.array:\n",
+ " binary_repr = np.binary_repr(number, width=width)\n",
+ " blocks = [binary_repr[i:i+CHUNK_SIZE] for i in range(0, len(binary_repr), CHUNK_SIZE)]\n",
+ " return np.array([int(block, 2) for block in blocks])\n",
+ "\n",
+ "# Encode a number with the key size\n",
+ "def encode_key(number: int) -> np.array:\n",
+ " return encode(number, width=KEY_SIZE)\n",
+ "\n",
+ "# Encode a number with the value size\n",
+ "def encode_value(number: int) -> np.array:\n",
+ " return encode(number, width=VALUE_SIZE)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Decode\n",
+ "\n",
+ "Decodes a numpy array into a number.\n",
+ "\n",
+ "| Function Call | Input(Numpy Array) | Result(Integer) |\n",
+ "| --- | --- | --- |\n",
+ "| decode([0, 0, 1, 9]) | [0, 0, 1, 9] | 25 |\n",
+ "| decode([0, 0, 2, 8]) | [0, 0, 2, 8] | 40 |\n",
+ "| decode([0, 0, 11]) | [0, 0, 11] | 11 |"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def decode(encoded_number: np.array) -> int:\n",
+ " result = 0\n",
+ " for i in range(len(encoded_number)):\n",
+ " result += 2**(CHUNK_SIZE*i) * encoded_number[(len(encoded_number) - i) - 1]\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Row Selection with Table Lookups\n",
+ "\n",
+ "Keep selected function is used to select the correct row of the database for each operation.\n",
+ "\n",
+ "Below is the python definition of the function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def keep_selected(value, selected):\n",
+ " if selected:\n",
+ " return value\n",
+ " else:\n",
+ " return 0"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This function takes any value, and a boolean flag that indicates if value is selected or not. Within homomorphic encryption circuits, we cannot compile this function as encrypted values cannot affect control flow. Instead, we turn this function into a lookup table.\n",
+ "\n",
+ "Selected is preprended to the value, and function is modified to act as below.\n",
+ "\n",
+ "`keep_selected(i=0..15, 1) -> 0` \n",
+ "`keep_selected(i=16..31, 0) -> i-16`\n",
+ "\n",
+ "Below is the python code for the lookup table."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "keep_selected_lut = cnp.LookupTable([0 for _ in range(16)] + [i for i in range(16)])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Most significant bit of the input to the lookup table represents the select bit, hence if `select=0 <=> i=0..15` then the output is `0`. If `select=1 <=> i=16..31` then the output is `i-16`, the value itself.\n",
+ "\n",
+ "To summarize, we could implement the keep_selected function as below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def keep_selected_using_lut(value, selected):\n",
+ " packed = (2 ** CHUNK_SIZE) * selected + value\n",
+ " return keep_selected_lut[packed]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "### Circuit Implementation Functions\n",
+ "\n",
+ "The following functions are used to implement the key-value database circuits. \n",
+ "Three circuits are implemented: \n",
+ "- insert: Inserts a key value pair into the database\n",
+ "- replace: Replaces the value of a key in the database\n",
+ "- query: Queries the database for a key and returns the value\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Insert\n",
+ "\n",
+ "Algorithm of the insert function is as follows:\n",
+ "- Create a selection array to select a certain row of the database\n",
+ "- Fill this array by setting the first non-empty row of the database to 1\n",
+ "- Create a state update array, where the first non-empty row of the database is set to the new key and value\n",
+ "- Add the state update array to the state\n",
+ "\n",
+ "Implementation is below. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Insert a key value pair into the database\n",
+ "# - state: The state of the database\n",
+ "# - key: The key to insert\n",
+ "# - value: The value to insert\n",
+ "# Returns the updated state\n",
+ "def _insert_impl(state, key, value):\n",
+ " # Get the used bit from the state\n",
+ " # This bit is used to determine if an entry is used or not\n",
+ " flags = state[:, FLAG]\n",
+ "\n",
+ " # Create a selection array\n",
+ " # This array is used to select the first unused entry\n",
+ " selection = cnp.zeros(NUMBER_OF_ENTRIES)\n",
+ "\n",
+ " # The found bit is used to determine if an unused entry has been found\n",
+ " found = cnp.zero()\n",
+ " for i in range(NUMBER_OF_ENTRIES):\n",
+ " # The packed flag and found bit are used to determine if the entry is unused\n",
+ " # | Flag | Found |\n",
+ " # | 0 | 0 | -> Unused, select\n",
+ " # | 0 | 1 | -> Unused, skip\n",
+ " # | 1 | 0 | -> Used, skip\n",
+ " # | 1 | 1 | -> Used, skip\n",
+ " packed_flag_and_found = (found * 2) + flags[i]\n",
+ " # Use the packed flag and found bit to determine if the entry is unused\n",
+ " is_selected = (packed_flag_and_found == 0)\n",
+ "\n",
+ " # Update the selection array\n",
+ " selection[i] = is_selected\n",
+ " # Update the found bit, so all entries will be \n",
+ " # skipped after the first unused entry is found\n",
+ " found += is_selected\n",
+ "\n",
+ " # Create a state update array\n",
+ " state_update = cnp.zeros(STATE_SHAPE)\n",
+ " # Update the state update array with the selection array\n",
+ " state_update[:, FLAG] = selection\n",
+ "\n",
+ " # Reshape the selection array to be able to use it as an index\n",
+ " selection = selection.reshape((-1, 1))\n",
+ "\n",
+ " # Create a packed selection and key array\n",
+ " # This array is used to update the key of the selected entry\n",
+ " packed_selection_and_key = (selection * (2 ** CHUNK_SIZE)) + key\n",
+ " key_update = keep_selected_lut[packed_selection_and_key]\n",
+ "\n",
+ " # Create a packed selection and value array\n",
+ " # This array is used to update the value of the selected entry\n",
+ " packed_selection_and_value = selection * (2 ** CHUNK_SIZE) + value\n",
+ " value_update = keep_selected_lut[packed_selection_and_value]\n",
+ "\n",
+ " # Update the state update array with the key and value update arrays\n",
+ " state_update[:, KEY] = key_update\n",
+ " state_update[:, VALUE] = value_update\n",
+ "\n",
+ " # Update the state with the state update array\n",
+ " new_state = state + state_update\n",
+ " return new_state"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Replace\n",
+ "\n",
+ "Algorithm of the replace function is as follows:\n",
+ "- Create a equal-rows array to select rows that match the given key in the database\n",
+ "- Create a selection array to select the row that is currently used in the database\n",
+ "- Set the selection array to 1 for the row that contains the key, and 0 for all other rows\n",
+ "- Create an inverse selection array by inverting the selection array\n",
+ "- Row set to 1 in the selection array will be updated, whereas all other values will stay the same\n",
+ "- To do this, we multiply the selection array with the new key and value, and the inverse selection array with the old key and value\n",
+ "- We then add the two arrays to get the new state"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Replace the value of a key in the database\n",
+ "# If the key is not in the database, nothing happens\n",
+ "# If the key is in the database, the value is replaced\n",
+ "# - state: The state of the database\n",
+ "# - key: The key to replace\n",
+ "# - value: The value to replace\n",
+ "# Returns the updated state\n",
+ "def _replace_impl(state, key, value):\n",
+ " # Get the flags, keys and values from the state\n",
+ " flags = state[:, FLAG]\n",
+ " keys = state[:, KEY]\n",
+ " values = state[:, VALUE]\n",
+ "\n",
+ " \n",
+ "\n",
+ " # Create an equal_rows array\n",
+ " # This array is used to select all entries with the given key\n",
+ " # The equal_rows array is created by comparing the keys in the state\n",
+ " # with the given key, and only setting the entry to 1 if the keys are equal\n",
+ " # Example:\n",
+ " # keys = [[1, 0, 1, 0], [0, 1, 0, 1, 1]]\n",
+ " # key = [1, 0, 1, 0]\n",
+ " # equal_rows = [1, 0]\n",
+ " equal_rows = (np.sum((keys - key) == 0, axis=1) == NUMBER_OF_KEY_CHUNKS)\n",
+ "\n",
+ " # Create a selection array\n",
+ " # This array is used to select the entry to change the value of\n",
+ " # The selection array is created by combining the equal_rows array\n",
+ " # with the flags array, which is used to determine if an entry is used or not\n",
+ " # The reason for combining the equal_rows array with the flags array\n",
+ " # is to make sure that only used entries are selected\n",
+ " selection = (flags * 2 + equal_rows == 3).reshape((-1, 1))\n",
+ " \n",
+ " # Create a packed selection and value array\n",
+ " # This array is used to update the value of the selected entry\n",
+ " packed_selection_and_value = selection * (2 ** CHUNK_SIZE) + value\n",
+ " set_value = keep_selected_lut[packed_selection_and_value]\n",
+ "\n",
+ " # Create an inverse selection array\n",
+ " # This array is used to pick entries that are not selected\n",
+ " # Example:\n",
+ " # selection = [1, 0, 0]\n",
+ " # inverse_selection = [0, 1, 1]\n",
+ " inverse_selection = 1 - selection\n",
+ "\n",
+ " # Create a packed inverse selection and value array\n",
+ " # This array is used to keep the value of the entries that are not selected\n",
+ " packed_inverse_selection_and_values = inverse_selection * (2 ** CHUNK_SIZE) + values\n",
+ " kept_values = keep_selected_lut[packed_inverse_selection_and_values]\n",
+ "\n",
+ " # Update the values of the state with the new values\n",
+ " new_values = kept_values + set_value\n",
+ " state[:, VALUE] = new_values\n",
+ "\n",
+ " return state"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Query\n",
+ "\n",
+ "Algorithm of the query function is as follows:\n",
+ "- Create a selection array to select a certain row of the database\n",
+ "- Set the selection array to 1 for the row that contains the key\n",
+ "- Multiply the selection array with the state to zero all rows that do not contain the key\n",
+ "- Sum the rows of the state to get the remaining non-zero row, basically doing a dimension reduction\n",
+ "- Prepend the found flag to the value, return the resulting array.\n",
+ "- The resulting array will be destructured in the non-encrypted query function"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Query the database for a key and return the value\n",
+ "# - state: The state of the database\n",
+ "# - key: The key to query\n",
+ "# Returns an array with the following format:\n",
+ "# [found, value]\n",
+ "# found: 1 if the key was found, 0 otherwise\n",
+ "# value: The value of the key if the key was found, 0 otherwise\n",
+ "def _query_impl(state, key):\n",
+ " # Get the keys and values from the state\n",
+ " keys = state[:, KEY]\n",
+ " values = state[:, VALUE]\n",
+ "\n",
+ " # Create a selection array\n",
+ " # This array is used to select the entry with the given key\n",
+ " # The selection array is created by comparing the keys in the state\n",
+ " # with the given key, and only setting the entry to 1 if the keys are equal\n",
+ " # Example:\n",
+ " # keys = [[1, 0, 1, 0], [0, 1, 0, 1, 1]]\n",
+ " # key = [1, 0, 1, 0]\n",
+ " # selection = [1, 0]\n",
+ " selection = (np.sum((keys - key) == 0, axis=1) == NUMBER_OF_KEY_CHUNKS).reshape((-1, 1))\n",
+ "\n",
+ " # Create a found bit\n",
+ " # This bit is used to determine if the key was found\n",
+ " # The found bit is set to 1 if the key was found, and 0 otherwise\n",
+ " found = np.sum(selection)\n",
+ "\n",
+ " # Create a packed selection and value array\n",
+ " # This array is used to get the value of the selected entry\n",
+ " packed_selection_and_values = selection * (2 ** CHUNK_SIZE) + values\n",
+ " value_selection = keep_selected_lut[packed_selection_and_values]\n",
+ "\n",
+ " # Sum the value selection array to get the value\n",
+ " value = np.sum(value_selection, axis=0)\n",
+ "\n",
+ " # Return the found bit and the value\n",
+ " return cnp.array([found, *value])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Key-Value Database Interface\n",
+ "\n",
+ "KeyValueDatabase class is the interface that exposes the functionality of the key-value database."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class KeyValueDatabase:\n",
+ " \"\"\"\n",
+ " A key-value database that uses fully homomorphic encryption circuits to store the data.\n",
+ " \"\"\"\n",
+ "\n",
+ " # The state of the database, it holds all the \n",
+ " # keys and values as a table of entries\n",
+ " _state: np.ndarray\n",
+ "\n",
+ " # The circuits used to implement the database\n",
+ " _insert_circuit: cnp.Circuit\n",
+ " _replace_circuit: cnp.Circuit\n",
+ " _query_circuit: cnp.Circuit\n",
+ "\n",
+ " # Below is the initialization of the database.\n",
+ "\n",
+ " # First, we initialize the state, and provide the necessary input sets. \n",
+ " # In versions later than concrete-numpy.0.9.0, we can use the `direct circuit` \n",
+ " # functionality to define the bit-widths of encrypted values rather than using \n",
+ " # `input sets`. Input sets are used to determine the required bit-width of the \n",
+ " # encrypted values. Hence, we add the largest possible value in the database \n",
+ " # to the input sets.\n",
+ "\n",
+ " # Within the initialization phase, we create the required configuration, \n",
+ " # compilers, circuits, and keys. Circuit and key generation phase is \n",
+ " # timed and printed in the output.\n",
+ "\n",
+ " def __init__(self):\n",
+ " # Initialize the state to all zeros\n",
+ " self._state = np.zeros(STATE_SHAPE, dtype=np.int64)\n",
+ "\n",
+ " ## Input sets for initialization of the circuits\n",
+ " # The input sets are used to initialize the circuits with the correct parameters\n",
+ "\n",
+ " # The input set for the query circuit\n",
+ " inputset_binary = [\n",
+ " (\n",
+ " np.zeros(STATE_SHAPE, dtype=np.int64), # state\n",
+ " np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1), # key\n",
+ " )\n",
+ " ]\n",
+ " # The input set for the insert and replace circuits\n",
+ " inputset_ternary = [\n",
+ " (\n",
+ " np.zeros(STATE_SHAPE, dtype=np.int64), # state\n",
+ " np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1), # key\n",
+ " np.ones(NUMBER_OF_VALUE_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1), # value\n",
+ " )\n",
+ " ]\n",
+ "\n",
+ " ## Circuit compilation\n",
+ "\n",
+ " # Create a configuration for the compiler\n",
+ " configuration = cnp.Configuration(\n",
+ " enable_unsafe_features=True,\n",
+ " use_insecure_key_cache=True,\n",
+ " insecure_key_cache_location=\".keys\",\n",
+ " )\n",
+ "\n",
+ " # Create the compilers for the circuits\n",
+ " # Each compiler is provided with\n",
+ " # - The implementation of the circuit\n",
+ " # - The inputs and their corresponding types of the circuit\n",
+ " # - \"encrypted\": The input is encrypted\n",
+ " # - \"plain\": The input is not encrypted\n",
+ " insert_compiler = cnp.Compiler(\n",
+ " _insert_impl,\n",
+ " {\"state\": \"encrypted\", \"key\": \"encrypted\", \"value\": \"encrypted\"}\n",
+ " )\n",
+ " replace_compiler = cnp.Compiler(\n",
+ " _replace_impl,\n",
+ " {\"state\": \"encrypted\", \"key\": \"encrypted\", \"value\": \"encrypted\"}\n",
+ " )\n",
+ " query_compiler = cnp.Compiler(\n",
+ " _query_impl,\n",
+ " {\"state\": \"encrypted\", \"key\": \"encrypted\"}\n",
+ " )\n",
+ "\n",
+ "\n",
+ " ## Compile the circuits\n",
+ " # The circuits are compiled with the input set and the configuration\n",
+ "\n",
+ " print()\n",
+ "\n",
+ " print(\"Compiling insertion circuit...\")\n",
+ " start = time.time()\n",
+ " self._insert_circuit = insert_compiler.compile(inputset_ternary, configuration)\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " print()\n",
+ "\n",
+ " print(\"Compiling replacement circuit...\")\n",
+ " start = time.time()\n",
+ " self._replace_circuit = replace_compiler.compile(inputset_ternary, configuration)\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " print()\n",
+ "\n",
+ " print(\"Compiling query circuit...\")\n",
+ " start = time.time()\n",
+ " self._query_circuit = query_compiler.compile(inputset_binary, configuration)\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " print()\n",
+ "\n",
+ " ## Generate the keys for the circuits\n",
+ " # The keys are seaparately generated for each circuit\n",
+ "\n",
+ " print(\"Generating insertion keys...\")\n",
+ " start = time.time()\n",
+ " self._insert_circuit.keygen()\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " print()\n",
+ "\n",
+ " print(\"Generating replacement keys...\")\n",
+ " start = time.time()\n",
+ " self._replace_circuit.keygen()\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " print()\n",
+ "\n",
+ " print(\"Generating query keys...\")\n",
+ " start = time.time()\n",
+ " self._query_circuit.keygen()\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " ### The Interface Functions\n",
+ " \n",
+ " # The following methods are used to interact with the database. \n",
+ " # They are used to insert, replace and query the database. \n",
+ " # The methods are implemented by encrypting the inputs, \n",
+ " # running the circuit and decrypting the output.\n",
+ "\n",
+ " # Insert a key-value pair into the database\n",
+ " # - key: The key to insert\n",
+ " # - value: The value to insert\n",
+ " # The key and value are encoded before they are inserted\n",
+ " # The state of the database is updated with the new key-value pair\n",
+ " def insert(self, key, value):\n",
+ " print()\n",
+ " print(\"Inserting...\")\n",
+ " start = time.time()\n",
+ " self._state = self._insert_circuit.encrypt_run_decrypt(\n",
+ " self._state, encode_key(key), encode_value(value)\n",
+ " )\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " # Replace a key-value pair in the database\n",
+ " # - key: The key to replace\n",
+ " # - value: The new value to insert with the key\n",
+ " # The key and value are encoded before they are inserted\n",
+ " # The state of the database is updated with the new key-value pair\n",
+ " def replace(self, key, value):\n",
+ " print()\n",
+ " print(\"Replacing...\")\n",
+ " start = time.time()\n",
+ " self._state = self._replace_circuit.encrypt_run_decrypt(\n",
+ " self._state, encode_key(key), encode_value(value)\n",
+ " )\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " # Query the database for a key\n",
+ " # - key: The key to query\n",
+ " # The key is encoded before it is queried\n",
+ " # Returns the value associated with the key or None if the key is not found\n",
+ " def query(self, key):\n",
+ " print()\n",
+ " print(\"Querying...\")\n",
+ " start = time.time()\n",
+ " result = self._query_circuit.encrypt_run_decrypt(\n",
+ " self._state, encode_key(key)\n",
+ " )\n",
+ " end = time.time()\n",
+ " print(f\"(took {end - start:.3f} seconds)\")\n",
+ "\n",
+ " if result[0] == 0:\n",
+ " return None\n",
+ "\n",
+ " return decode(result[1:])\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The implementation provided above is the statically-sized implementation. We will briefly discuss the dynamic implementation below.\n",
+ "\n",
+ "Whereas the static implementation works with circuits over the whole database, the dynamic implementation works with circuits over a single row of the database.\n",
+ "\n",
+ "In the dynamic implementation, we iterate over the rows of the database in a simple Python loop and run the circuits on each row. This difference is reflected in the `insert`, `replace` and `query` functions.\n",
+ "\n",
+ "Comparing the two, the static implementation is more efficient for dense databases as it works with parallelized tensors, but it takes the same amount of time to query an empty database as one with 1 million entries. The dynamic implementation is more efficient for sparse databases as its cost grows with the number of entries, but it doesn't use circuit-level parallelization. Also, insertion is free in the dynamic implementation, as it only appends a new item to a Python list."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We have now finished the definition of the database. We can now use the database to insert, replace and query values."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Usage\n",
+ "\n",
+ "Below is the initialization of the database. As we provide parameters globally, we can simply initialize the database with the following command."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Compiling insertion circuit...\n",
+ "(took 1.178 seconds)\n",
+ "\n",
+ "Compiling replacement circuit...\n",
+ "(took 0.626 seconds)\n",
+ "\n",
+ "Compiling query circuit...\n",
+ "(took 0.603 seconds)\n",
+ "\n",
+ "Generating insertion keys...\n",
+ "(took 0.188 seconds)\n",
+ "\n",
+ "Generating replacement keys...\n",
+ "(took 0.280 seconds)\n",
+ "\n",
+ "Generating query keys...\n",
+ "(took 0.227 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "## Test: Initialization\n",
+ "# Initialize the database\n",
+ "db = KeyValueDatabase()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can use the interface functions as provided below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Inserting...\n",
+ "(took 0.768 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test: Insert/Query\n",
+ "# Insert (key: 3, value: 4) into the database\n",
+ "db.insert(3, 4)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Querying...\n",
+ "(took 0.460 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Query the database for the key 3\n",
+ "# The value 4 should be returned\n",
+ "assert db.query(3) == 4"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Replacing...\n",
+ "(took 0.806 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test: Replace/Query\n",
+ "# Replace the value of the key 3 with 1\n",
+ "db.replace(3, 1)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Querying...\n",
+ "(took 0.483 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Query the database for the key 3\n",
+ "# The value 1 should be returned\n",
+ "assert db.query(3) == 1"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Inserting...\n",
+ "(took 0.618 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test: Insert/Query\n",
+ "# Insert (key: 25, value: 40) into the database\n",
+ "db.insert(25, 40)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Querying...\n",
+ "(took 0.957 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Query the database for the key 25\n",
+ "# The value 40 should be returned\n",
+ "assert db.query(25) == 40"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Querying...\n",
+ "(took 1.133 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test: Query Not Found\n",
+ "# Query the database for the key 4\n",
+ "# None should be returned\n",
+ "assert db.query(4) is None"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Replacing...\n",
+ "(took 2.325 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test: Replace/Query\n",
+ "# Replace the value of the key 3 with 5\n",
+ "db.replace(3, 5)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Querying...\n",
+ "(took 1.172 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Query the database for the key 3\n",
+ "# The value 5 should be returned\n",
+ "assert db.query(3) == 5"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can now test the limits. We'll use the hyper-parameters `KEY_SIZE` and `VALUE_SIZE` to ensure that the examples work robustly against changes to the parameters."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define lower/upper bounds for the key\n",
+ "minimum_key = 1\n",
+ "maximum_key = 2 ** KEY_SIZE - 1\n",
+ "# Define lower/upper bounds for the value\n",
+ "minimum_value = 1\n",
+ "maximum_value = 2 ** VALUE_SIZE - 1"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below are the usage examples with these bounds."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Inserting...\n",
+ "(took 1.358 seconds)\n",
+ "\n",
+ "Querying...\n",
+ "(took 1.137 seconds)\n",
+ "\n",
+ "Inserting...\n",
+ "(took 1.383 seconds)\n",
+ "\n",
+ "Querying...\n",
+ "(took 1.207 seconds)\n",
+ "\n",
+ "Replacing...\n",
+ "(took 2.404 seconds)\n",
+ "\n",
+ "Querying...\n",
+ "(took 1.241 seconds)\n",
+ "\n",
+ "Replacing...\n",
+ "(took 2.345 seconds)\n",
+ "\n",
+ "Querying...\n",
+ "(took 1.213 seconds)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test: Insert/Replace/Query Bounds\n",
+ "# Insert (key: minimum_key , value: minimum_value) into the database\n",
+ "db.insert(minimum_key, minimum_value)\n",
+ "\n",
+ "# Query the database for the key=minimum_key\n",
+ "# The value minimum_value should be returned\n",
+ "assert db.query(minimum_key) == minimum_value\n",
+ "\n",
+ "# Insert (key: maximum_key , value: maximum_value) into the database\n",
+ "db.insert(maximum_key, maximum_value)\n",
+ "\n",
+ "# Query the database for the key=maximum_key\n",
+ "# The value maximum_value should be returned\n",
+ "assert db.query(maximum_key) == maximum_value\n",
+ "\n",
+ "# Replace the value of key=minimum_key with maximum_value\n",
+ "db.replace(minimum_key, maximum_value)\n",
+ "\n",
+ "# Query the database for the key=minimum_key\n",
+ "# The value maximum_value should be returned\n",
+ "assert db.query(minimum_key) == maximum_value\n",
+ "\n",
+ "# Replace the value of key=maximum_key with minimum_value\n",
+ "db.replace(maximum_key, minimum_value)\n",
+ "\n",
+ "# Query the database for the key=maximum_key\n",
+ "# The value minimum_value should be returned\n",
+ "assert db.query(maximum_key) == minimum_value"
+ ]
+ }
+ ],
+ "metadata": {
+ "execution": {
+ "timeout": 10800
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/frontends/concrete-python/docs/tutorial/key_value_database.md b/frontends/concrete-python/docs/tutorial/key_value_database.md
new file mode 100644
index 000000000..475dfc9f2
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/key_value_database.md
@@ -0,0 +1,3 @@
+# Key Value Database
+
+This is an interactive tutorial written as a Jupyter Notebook, which you can find [here](https://github.com/zama-ai/concrete-numpy/blob/main/docs/tutorial/key_value_database.ipynb).
diff --git a/frontends/concrete-python/docs/tutorial/rounded_table_lookups.md b/frontends/concrete-python/docs/tutorial/rounded_table_lookups.md
new file mode 100644
index 000000000..c2f266840
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/rounded_table_lookups.md
@@ -0,0 +1,183 @@
+# Rounded Table Lookups
+
+{% hint style="warning" %}
+Rounded table lookups are not compilable yet. The API is stable and will not change, so it's documented here, but you might not be able to run the code samples in this document.
+{% endhint %}
+
+Table lookups have a strict constraint on the number of bits they support. This can be quite limiting, especially if you don't need exact precision.
+
+To overcome this shortcoming, the rounded table lookup operation is introduced. It extracts the most significant bits of a large integer and then applies the table lookup to those bits.
+
+Imagine you have an 8-bit value but want a 5-bit table lookup. You can call `cnp.round_bit_pattern(input, lsbs_to_remove=3)` and use the resulting value in the table lookup.
+
+In Python, evaluation will work like the following:
+```
+0b_0000_0000 => 0b_0000_0000
+0b_0000_0001 => 0b_0000_0000
+0b_0000_0010 => 0b_0000_0000
+0b_0000_0011 => 0b_0000_0000
+0b_0000_0100 => 0b_0000_1000
+0b_0000_0101 => 0b_0000_1000
+0b_0000_0110 => 0b_0000_1000
+0b_0000_0111 => 0b_0000_1000
+
+0b_1010_0000 => 0b_1010_0000
+0b_1010_0001 => 0b_1010_0000
+0b_1010_0010 => 0b_1010_0000
+0b_1010_0011 => 0b_1010_0000
+0b_1010_0100 => 0b_1010_1000
+0b_1010_0101 => 0b_1010_1000
+0b_1010_0110 => 0b_1010_1000
+0b_1010_0111 => 0b_1010_1000
+
+0b_1010_1000 => 0b_1010_1000
+0b_1010_1001 => 0b_1010_1000
+0b_1010_1010 => 0b_1010_1000
+0b_1010_1011 => 0b_1010_1000
+0b_1010_1100 => 0b_1011_0000
+0b_1010_1101 => 0b_1011_0000
+0b_1010_1110 => 0b_1011_0000
+0b_1010_1111 => 0b_1011_0000
+
+0b_1011_1000 => 0b_1011_1000
+0b_1011_1001 => 0b_1011_1000
+0b_1011_1010 => 0b_1011_1000
+0b_1011_1011 => 0b_1011_1000
+0b_1011_1100 => 0b_1100_0000
+0b_1011_1101 => 0b_1100_0000
+0b_1011_1110 => 0b_1100_0000
+0b_1011_1111 => 0b_1100_0000
+```
+
+and during homomorphic execution, it'll be converted like this:
+```
+0b_0000_0000 => 0b_00000
+0b_0000_0001 => 0b_00000
+0b_0000_0010 => 0b_00000
+0b_0000_0011 => 0b_00000
+0b_0000_0100 => 0b_00001
+0b_0000_0101 => 0b_00001
+0b_0000_0110 => 0b_00001
+0b_0000_0111 => 0b_00001
+
+0b_1010_0000 => 0b_10100
+0b_1010_0001 => 0b_10100
+0b_1010_0010 => 0b_10100
+0b_1010_0011 => 0b_10100
+0b_1010_0100 => 0b_10101
+0b_1010_0101 => 0b_10101
+0b_1010_0110 => 0b_10101
+0b_1010_0111 => 0b_10101
+
+0b_1010_1000 => 0b_10101
+0b_1010_1001 => 0b_10101
+0b_1010_1010 => 0b_10101
+0b_1010_1011 => 0b_10101
+0b_1010_1100 => 0b_10110
+0b_1010_1101 => 0b_10110
+0b_1010_1110 => 0b_10110
+0b_1010_1111 => 0b_10110
+
+0b_1011_1000 => 0b_10111
+0b_1011_1001 => 0b_10111
+0b_1011_1010 => 0b_10111
+0b_1011_1011 => 0b_10111
+0b_1011_1100 => 0b_11000
+0b_1011_1101 => 0b_11000
+0b_1011_1110 => 0b_11000
+0b_1011_1111 => 0b_11000
+```
+
+and then a modified table lookup is applied to the resulting 5 bits.
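The two conversion tables above follow a simple rounding rule. Here is a plain-Python sketch of that rule (an illustration of the semantics only, not the actual library implementation):

```python
def round_bit_pattern(x, lsbs_to_remove):
    # Round x to the nearest multiple of 2**lsbs_to_remove (ties round up),
    # which is what the first table above shows
    unit = 2 ** lsbs_to_remove
    return ((x + unit // 2) // unit) * unit

def rounded_msbs(x, lsbs_to_remove):
    # During homomorphic execution, only the kept most significant bits
    # are used, which is what the second table above shows
    return round_bit_pattern(x, lsbs_to_remove) >> lsbs_to_remove

assert round_bit_pattern(0b_0000_0111, 3) == 0b_0000_1000
assert round_bit_pattern(0b_1010_0011, 3) == 0b_1010_0000
assert rounded_msbs(0b_1011_1100, 3) == 0b_11000
```

With `lsbs_to_remove=3`, values are rounded to the nearest multiple of 8, and the table lookup then works directly on the remaining 5 bits.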
+
+Here is a concrete example. Let's say you want to apply ReLU to an 18-bit value. First, let's see what the original ReLU looks like:
+
+```python
+import matplotlib.pyplot as plt
+
+def relu(x):
+ return x if x >= 0 else 0
+
+xs = range(-100_000, 100_000)
+ys = [relu(x) for x in xs]
+
+plt.plot(xs, ys)
+plt.show()
+```
+
+
+
+The input range is [-100_000, 100_000), which means 18-bit table lookups would be required. Since those are not supported yet, you can apply the rounding operation to the input before passing it to the `relu` function:
+
+```python
+import concrete.numpy as cnp
+import matplotlib.pyplot as plt
+import numpy as np
+
+def relu(x):
+ return x if x >= 0 else 0
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ x = cnp.round_bit_pattern(x, lsbs_to_remove=10)
+ return cnp.univariate(relu)(x)
+
+inputset = [-100_000, (100_000 - 1)]
+circuit = f.compile(inputset)
+
+xs = range(-100_000, 100_000)
+ys = [circuit.simulate(x) for x in xs]
+
+plt.plot(xs, ys)
+plt.show()
+```
+
+In this case, we've removed the 10 least significant bits of the input and then applied the ReLU function to this value to get:
+
+
+
+which is close enough to the original ReLU for some use cases. If your application is more flexible, you could remove more bits, say 12, to get:
+
+
+
+This is very useful, but in some cases you don't know how many bits your input has, so it's not reliable to specify `lsbs_to_remove` manually. For this reason, the `AutoRounder` class is introduced.
+
+```python
+import concrete.numpy as cnp
+import matplotlib.pyplot as plt
+import numpy as np
+
+rounder = cnp.AutoRounder(target_msbs=6)
+
+def relu(x):
+ return x if x >= 0 else 0
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ x = cnp.round_bit_pattern(x, lsbs_to_remove=rounder)
+ return cnp.univariate(relu)(x)
+
+inputset = [-100_000, (100_000 - 1)]
+cnp.AutoRounder.adjust(f, inputset) # alternatively, you can use `auto_adjust_rounders=True` below
+circuit = f.compile(inputset)
+
+xs = range(-100_000, 100_000)
+ys = [circuit.simulate(x) for x in xs]
+
+plt.plot(xs, ys)
+plt.show()
+```
+
+`AutoRounder`s allow you to set how many of the most significant bits to keep, but they need to be adjusted using an inputset to determine how many of the least significant bits to remove. This can be done manually using `cnp.AutoRounder.adjust(function, inputset)`, or by setting `auto_adjust_rounders` to `True` during compilation.
+
+In the example above, `6` of the most significant bits are kept to get:
+
+
+
+You can adjust `target_msbs` depending on your requirements. If you set it to `4` for example, you'd get:
+
+
+
+{% hint style="warning" %}
+`AutoRounder`s should be defined outside the function being compiled. They are used to store the result of the adjustment process, so they shouldn't be created each time the function is called.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/tutorial/simulation.md b/frontends/concrete-python/docs/tutorial/simulation.md
new file mode 100644
index 000000000..2558341b0
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/simulation.md
@@ -0,0 +1,37 @@
+# Simulation
+
+During development, the speed of homomorphic execution is a big blocker for fast prototyping.
+You could, of course, call the function you're trying to compile directly, but that won't be exactly the same as FHE execution, which has a certain probability of error (see [Exactness](../getting-started/exactness.md)).
+
+To address this, simulation is introduced:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return (x + 1) ** 2
+
+inputset = [np.random.randint(0, 10, size=(10,)) for _ in range(10)]
+circuit = f.compile(inputset, p_error=0.1)
+
+sample = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+actual = f(sample)
+simulation = circuit.simulate(sample)
+
+print(actual.tolist())
+print(simulation.tolist())
+```
+
+prints
+
+```
+[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
+[1, 4, 9, 16, 16, 36, 49, 64, 81, 100]
+```
+
+{% hint style="warning" %}
+Currently, simulation is better than directly calling the function in Python, but it's not exactly the same as FHE execution. The reason is that the simulation is implemented in Python. Imagine you have an identity table lookup: the compiler might omit it from the generated FHE code, but it'll still be present in the Python simulation, as those optimizations are not performed in Python. This results in a bigger error in the simulation. Furthermore, some operations contain multiple table lookups internally, and those cannot be simulated unless the actual implementations of said operations are ported to Python. In the future, simulation functionality will be provided by the compiler itself, which will address all of these issues. Until then, keep these limitations in mind.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/tutorial/table_lookups.md b/frontends/concrete-python/docs/tutorial/table_lookups.md
new file mode 100644
index 000000000..b8a926111
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/table_lookups.md
@@ -0,0 +1,178 @@
+# Table Lookups
+
+In this tutorial, we will review the ways to perform direct table lookups in **Concrete-Numpy**.
+
+## Direct table lookup
+
+**Concrete-Numpy** provides a `LookupTable` class for you to create your own tables and apply them in your circuits.
+
+{% hint style="info" %}
+`LookupTable`s can have any number of elements; let's call it **N**. As long as the lookup variable is within the range \[-**N**, **N**), the table lookup is valid.
+
+If you go out of bounds of this range, you will get the following error:
+
+```
+IndexError: index 10 is out of bounds for axis 0 with size 6
+```
+{% endhint %}
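As a plain-NumPy illustration (using an array as a stand-in for an actual `LookupTable`), the valid index range and the out-of-bounds error behave like standard array indexing:

```python
import numpy as np

# Stand-in table with N = 6 elements, matching the error message above
table = np.array([2, -1, 3, 0, 5, 7])

assert table[0] == 2   # indices in [0, 6) are valid
assert table[-6] == 2  # negative indices count from the back, so [-6, 0) is valid too

try:
    table[10]  # outside [-6, 6)
except IndexError as error:
    print(error)  # index 10 is out of bounds for axis 0 with size 6
```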
+
+{% hint style="info" %}
+The number of elements in the lookup table doesn't affect performance in any way.
+{% endhint %}
+
+### With scalars
+
+You can create the lookup table using a list of integers and apply it using indexing:
+
+```python
+import concrete.numpy as cnp
+
+table = cnp.LookupTable([2, -1, 3, 0])
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return table[x]
+
+inputset = range(4)
+circuit = f.compile(inputset)
+
+assert circuit.encrypt_run_decrypt(0) == table[0] == 2
+assert circuit.encrypt_run_decrypt(1) == table[1] == -1
+assert circuit.encrypt_run_decrypt(2) == table[2] == 3
+assert circuit.encrypt_run_decrypt(3) == table[3] == 0
+```
+
+### With tensors
+
+When you apply the table lookup to a tensor, you apply the scalar table lookup to each element of the tensor:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+table = cnp.LookupTable([2, -1, 3, 0])
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return table[x]
+
+inputset = [np.random.randint(0, 4, size=(2, 3)) for _ in range(10)]
+circuit = f.compile(inputset)
+
+sample = [
+ [0, 1, 3],
+ [2, 3, 1],
+]
+expected_output = [
+ [2, -1, 0],
+ [3, 0, -1],
+]
+actual_output = circuit.encrypt_run_decrypt(np.array(sample))
+
+for i in range(2):
+ for j in range(3):
+ assert actual_output[i][j] == expected_output[i][j] == table[sample[i][j]]
+```
+
+### With negative values
+
+`LookupTable` mimics array indexing in Python, which means if the lookup variable is negative, the table is looked up from the back:
+
+```python
+import concrete.numpy as cnp
+
+table = cnp.LookupTable([2, -1, 3, 0])
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return table[-x]
+
+inputset = range(1, 5)
+circuit = f.compile(inputset)
+
+assert circuit.encrypt_run_decrypt(1) == table[-1] == 0
+assert circuit.encrypt_run_decrypt(2) == table[-2] == 3
+assert circuit.encrypt_run_decrypt(3) == table[-3] == -1
+assert circuit.encrypt_run_decrypt(4) == table[-4] == 2
+```
+
+## Direct multi table lookup
+
+If you want to apply a different lookup table to each element of a tensor, you can use a `LookupTable` of `LookupTable`s:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+squared = cnp.LookupTable([i ** 2 for i in range(4)])
+cubed = cnp.LookupTable([i ** 3 for i in range(4)])
+
+table = cnp.LookupTable([
+ [squared, cubed],
+ [squared, cubed],
+ [squared, cubed],
+])
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return table[x]
+
+inputset = [np.random.randint(0, 4, size=(3, 2)) for _ in range(10)]
+circuit = f.compile(inputset)
+
+sample = [
+ [0, 1],
+ [2, 3],
+ [3, 0],
+]
+expected_output = [
+ [0, 1],
+ [4, 27],
+ [9, 0]
+]
+actual_output = circuit.encrypt_run_decrypt(np.array(sample))
+
+for i in range(3):
+ for j in range(2):
+ if j == 0:
+ assert actual_output[i][j] == expected_output[i][j] == squared[sample[i][j]]
+ else:
+ assert actual_output[i][j] == expected_output[i][j] == cubed[sample[i][j]]
+```
+
+In this example, we applied the `squared` table to the first column and the `cubed` table to the second.
+
+## Fused table lookup
+
+**Concrete-Numpy** tries to fuse some operations into table lookups automatically, so you don't need to create the lookup tables manually:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+@cnp.compiler({"x": "encrypted"})
+def f(x):
+ return (42 * np.sin(x)).astype(np.int64) // 10
+
+inputset = range(8)
+circuit = f.compile(inputset)
+
+for x in range(8):
+ assert circuit.encrypt_run_decrypt(x) == f(x)
+```
+
+{% hint style="info" %}
+All lookup tables need to be from integers to integers. So, without `.astype(np.int64)`, **Concrete-Numpy** will not be able to fuse these operations.
+{% endhint %}
+
+The function is first traced into:
+
+
+
+Then, **Concrete-Numpy** fuses appropriate nodes:
+
+
+
+{% hint style="info" %}
+Fusing makes the code more readable and easier to modify, so try to utilize it over manual `LookupTable`s as much as possible.
+{% endhint %}
diff --git a/frontends/concrete-python/docs/tutorial/tagging.md b/frontends/concrete-python/docs/tutorial/tagging.md
new file mode 100644
index 000000000..280893356
--- /dev/null
+++ b/frontends/concrete-python/docs/tutorial/tagging.md
@@ -0,0 +1,56 @@
+# Tagging
+
+When you have big circuits, keeping track of which node corresponds to which part of your code becomes very hard. The tagging system can simplify such situations:
+
+```python
+import concrete.numpy as cnp
+import numpy as np
+
+
+def g(z):
+ with cnp.tag("def"):
+ a = 120 - z
+ b = a // 4
+ return b
+
+
+def f(x):
+ with cnp.tag("abc"):
+ x = x * 2
+ with cnp.tag("foo"):
+ y = x + 42
+ z = np.sqrt(y).astype(np.int64)
+
+ return g(z + 3) * 2
+```
+
+When you compile `f` with an inputset of `range(10)`, you get the following graph:
+
+```
+ %0 = x # EncryptedScalar ∈ [0, 9]
+ %1 = 2 # ClearScalar ∈ [2, 2] @ abc
+ %2 = multiply(%0, %1) # EncryptedScalar ∈ [0, 18] @ abc
+ %3 = 42 # ClearScalar ∈ [42, 42] @ abc.foo
+ %4 = add(%2, %3) # EncryptedScalar ∈ [42, 60] @ abc.foo
+ %5 = subgraph(%4) # EncryptedScalar ∈ [6, 7] @ abc
+ %6 = 3 # ClearScalar ∈ [3, 3]
+ %7 = add(%5, %6) # EncryptedScalar ∈ [9, 10]
+ %8 = 120 # ClearScalar ∈ [120, 120] @ def
+ %9 = subtract(%8, %7) # EncryptedScalar ∈ [110, 111] @ def
+%10 = 4 # ClearScalar ∈ [4, 4] @ def
+%11 = floor_divide(%9, %10) # EncryptedScalar ∈ [27, 27] @ def
+%12 = 2 # ClearScalar ∈ [2, 2]
+%13 = multiply(%11, %12) # EncryptedScalar ∈ [54, 54]
+return %13
+
+Subgraphs:
+
+ %5 = subgraph(%4):
+
+ %0 = input # EncryptedScalar @ abc.foo
+ %1 = sqrt(%0) # EncryptedScalar @ abc
+ %2 = astype(%1, dtype=int_) # EncryptedScalar @ abc
+ return %2
+```
+
+If you get an error, you'll see precisely where it occurred (e.g., which layer of the neural network, if you tag layers).
+
+{% hint style="info" %}
+In the future, we're planning to use tags for other features as well (e.g., to measure performance of tagged regions), so it's a good idea to start utilizing them for big circuits.
+{% endhint %}
diff --git a/frontends/concrete-python/examples/key-value-database/dynamic-size.py b/frontends/concrete-python/examples/key-value-database/dynamic-size.py
new file mode 100644
index 000000000..67e550523
--- /dev/null
+++ b/frontends/concrete-python/examples/key-value-database/dynamic-size.py
@@ -0,0 +1,247 @@
+import time
+
+import concrete.numpy as cnp
+import numpy as np
+
+
+CHUNK_SIZE = 4
+
+KEY_SIZE = 32
+VALUE_SIZE = 32
+
+assert KEY_SIZE % CHUNK_SIZE == 0
+assert VALUE_SIZE % CHUNK_SIZE == 0
+
+NUMBER_OF_KEY_CHUNKS = KEY_SIZE // CHUNK_SIZE
+NUMBER_OF_VALUE_CHUNKS = VALUE_SIZE // CHUNK_SIZE
+
+
+def encode(number, width):
+ binary_repr = np.binary_repr(number, width=width)
+ blocks = [binary_repr[i:i+CHUNK_SIZE] for i in range(0, len(binary_repr), CHUNK_SIZE)]
+ return np.array([int(block, 2) for block in blocks])
+
+def encode_key(number):
+ return encode(number, width=KEY_SIZE)
+
+def encode_value(number):
+ return encode(number, width=VALUE_SIZE)
+
+def decode(encoded_number):
+ result = 0
+ for i in range(len(encoded_number)):
+ result += 2**(CHUNK_SIZE*i) * encoded_number[(len(encoded_number) - i) - 1]
+ return result
+
+
+keep_if_match_lut = cnp.LookupTable([0 for _ in range(16)] + [i for i in range(16)])
+keep_if_no_match_lut = cnp.LookupTable([i for i in range(16)] + [0 for _ in range(16)])
+
+def _replace_impl(key, value, candidate_key, candidate_value):
+ match = np.sum((candidate_key - key) == 0) == NUMBER_OF_KEY_CHUNKS
+
+ packed_match_and_value = (2 ** CHUNK_SIZE) * match + value
+ value_if_match_else_zeros = keep_if_match_lut[packed_match_and_value]
+
+ packed_match_and_candidate_value = (2 ** CHUNK_SIZE) * match + candidate_value
+ zeros_if_match_else_candidate_value = keep_if_no_match_lut[packed_match_and_candidate_value]
+
+ return value_if_match_else_zeros + zeros_if_match_else_candidate_value
+
+def _query_impl(key, candidate_key, candidate_value):
+ match = np.sum((candidate_key - key) == 0) == NUMBER_OF_KEY_CHUNKS
+
+ packed_match_and_candidate_value = (2 ** CHUNK_SIZE) * match + candidate_value
+ candidate_value_if_match_else_zeros = keep_if_match_lut[packed_match_and_candidate_value]
+
+ return cnp.array([match, *candidate_value_if_match_else_zeros])
+
+
+class KeyValueDatabase:
+
+ _state: list[list[np.ndarray]]
+
+ _replace_circuit: cnp.Circuit
+ _query_circuit: cnp.Circuit
+
+ def __init__(self):
+ self._state = []
+
+ configuration = cnp.Configuration(
+ enable_unsafe_features=True,
+ use_insecure_key_cache=True,
+ insecure_key_cache_location=".keys",
+ )
+
+ replace_compiler = cnp.Compiler(
+ _replace_impl,
+ {
+ "key": "encrypted", "value": "encrypted",
+ "candidate_key": "encrypted", "candidate_value": "encrypted",
+ }
+ )
+ query_compiler = cnp.Compiler(
+ _query_impl,
+ {
+ "key": "encrypted",
+ "candidate_key": "encrypted", "candidate_value": "encrypted",
+ }
+ )
+
+ replace_inputset = [
+ (
+ # key
+ np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ # value
+ np.ones(NUMBER_OF_VALUE_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ # candidate_key
+ np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ # candidate_value
+ np.ones(NUMBER_OF_VALUE_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ )
+ ]
+ query_inputset = [
+ (
+ # key
+ np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ # candidate_key
+ np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ # candidate_value
+ np.ones(NUMBER_OF_VALUE_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ )
+ ]
+
+ print("Compiling replacement circuit...")
+ start = time.time()
+ self._replace_circuit = replace_compiler.compile(replace_inputset, configuration)
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Compiling query circuit...")
+ start = time.time()
+ self._query_circuit = query_compiler.compile(query_inputset, configuration)
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Generating replacement keys...")
+ start = time.time()
+ self._replace_circuit.keygen()
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Generating query keys...")
+ start = time.time()
+ self._query_circuit.keygen()
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ def insert(self, key, value):
+ print()
+ print("Inserting...")
+ start = time.time()
+
+ self._state.append([encode_key(key), encode_value(value)])
+
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ def replace(self, key, value):
+ print()
+ print("Replacing...")
+ start = time.time()
+
+ encoded_key = encode_key(key)
+ encoded_value = encode_value(value)
+
+ for entry in self._state:
+ entry[1] = self._replace_circuit.encrypt_run_decrypt(encoded_key, encoded_value, *entry)
+
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ def query(self, key):
+ print()
+ print("Querying...")
+ start = time.time()
+
+ encoded_key = encode_key(key)
+
+ accumulation = np.zeros(1 + NUMBER_OF_VALUE_CHUNKS, dtype=np.uint64)
+ for entry in self._state:
+ accumulation += self._query_circuit.encrypt_run_decrypt(encoded_key, *entry)
+
+ match_count = accumulation[0]
+ if match_count > 1:
+ raise RuntimeError("Key inserted multiple times")
+
+ result = decode(accumulation[1:]) if match_count == 1 else None
+
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ return result
+
+
+db = KeyValueDatabase()
+
+# Test: Insert/Query
+db.insert(3, 4)
+assert db.query(3) == 4
+
+db.replace(3, 1)
+assert db.query(3) == 1
+
+# Test: Insert/Query
+db.insert(25, 40)
+assert db.query(25) == 40
+
+# Test: Query Not Found
+assert db.query(4) is None
+
+# Test: Replace/Query
+db.replace(3, 5)
+assert db.query(3) == 5
+
+
+# Define lower/upper bounds for the key
+minimum_key = 0
+maximum_key = 2 ** KEY_SIZE - 1
+# Define lower/upper bounds for the value
+minimum_value = 0
+maximum_value = 2 ** VALUE_SIZE - 1
+
+
+# Test: Insert/Replace/Query Bounds
+# Insert (key: minimum_key , value: minimum_value) into the database
+db.insert(minimum_key, minimum_value)
+
+# Query the database for the key=minimum_key
+# The value minimum_value should be returned
+assert db.query(minimum_key) == minimum_value
+
+# Insert (key: maximum_key , value: maximum_value) into the database
+db.insert(maximum_key, maximum_value)
+
+# Query the database for the key=maximum_key
+# The value maximum_value should be returned
+assert db.query(maximum_key) == maximum_value
+
+# Replace the value of key=minimum_key with maximum_value
+db.replace(minimum_key, maximum_value)
+
+# Query the database for the key=minimum_key
+# The value maximum_value should be returned
+assert db.query(minimum_key) == maximum_value
+
+# Replace the value of key=maximum_key with minimum_value
+db.replace(maximum_key, minimum_value)
+
+# Query the database for the key=maximum_key
+# The value minimum_value should be returned
+assert db.query(maximum_key) == minimum_value
diff --git a/frontends/concrete-python/examples/key-value-database/static-size.py b/frontends/concrete-python/examples/key-value-database/static-size.py
new file mode 100644
index 000000000..30e693ebf
--- /dev/null
+++ b/frontends/concrete-python/examples/key-value-database/static-size.py
@@ -0,0 +1,299 @@
+import time
+
+import concrete.numpy as cnp
+import numpy as np
+
+
+NUMBER_OF_ENTRIES = 5
+CHUNK_SIZE = 4
+
+KEY_SIZE = 32
+VALUE_SIZE = 32
+
+assert KEY_SIZE % CHUNK_SIZE == 0
+assert VALUE_SIZE % CHUNK_SIZE == 0
+
+NUMBER_OF_KEY_CHUNKS = KEY_SIZE // CHUNK_SIZE
+NUMBER_OF_VALUE_CHUNKS = VALUE_SIZE // CHUNK_SIZE
+
+STATE_SHAPE = (NUMBER_OF_ENTRIES, 1 + NUMBER_OF_KEY_CHUNKS + NUMBER_OF_VALUE_CHUNKS)
+
+FLAG = 0
+KEY = slice(1, 1 + NUMBER_OF_KEY_CHUNKS)
+VALUE = slice(1 + NUMBER_OF_KEY_CHUNKS, None)
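+# Each state row is laid out as [flag, key chunks..., value chunks...]; with
+# CHUNK_SIZE = 4 and 32-bit keys and values, that is 1 + 8 + 8 = 17 cells.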
+
+
+def encode(number: int, width: int) -> np.ndarray:
+ binary_repr = np.binary_repr(number, width=width)
+ blocks = [binary_repr[i:i+CHUNK_SIZE] for i in range(0, len(binary_repr), CHUNK_SIZE)]
+ return np.array([int(block, 2) for block in blocks])
+
+def encode_key(number: int) -> np.ndarray:
+ return encode(number, width=KEY_SIZE)
+
+def encode_value(number: int) -> np.ndarray:
+ return encode(number, width=VALUE_SIZE)
+
+def decode(encoded_number: np.ndarray) -> int:
+ # Reassemble the integer from its big-endian CHUNK_SIZE-bit chunks.
+ result = 0
+ for i in range(len(encoded_number)):
+ result += 2**(CHUNK_SIZE*i) * encoded_number[(len(encoded_number) - i) - 1]
+ return result
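+
+# Worked example with CHUNK_SIZE = 4: encode(25, width=32) yields the chunks
+# [0, 0, 0, 0, 0, 0, 1, 9], and decode(np.array([0, 0, 0, 0, 0, 0, 1, 9])) == 25.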
+
+
+# Table over the packed 5-bit input (is_selected * 2**CHUNK_SIZE + chunk):
+# entries 0-15 map to 0 (row not selected), entries 16-31 map back to the
+# chunk itself (row selected).
+keep_selected_lut = cnp.LookupTable([0 for _ in range(16)] + [i for i in range(16)])
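+# For example, chunk 9 in a non-selected row packs to 9, which the table maps
+# to 0; in a selected row it packs to 16 + 9 = 25, which maps back to 9.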
+
+def _insert_impl(state, key, value):
+ flags = state[:, FLAG]
+
+ selection = cnp.zeros(NUMBER_OF_ENTRIES)
+
+ found = cnp.zero()
+ for i in range(NUMBER_OF_ENTRIES):
+ # Equals 0 only for the first row whose flag is 0 and no row selected yet.
+ packed_flag_and_already_found = (found * 2) + flags[i]
+ is_selected = (packed_flag_and_already_found == 0)
+
+ selection[i] = is_selected
+ found += is_selected
+
+ state_update = cnp.zeros(STATE_SHAPE)
+ state_update[:, FLAG] = selection
+
+ selection = selection.reshape((-1, 1))
+
+ packed_selection_and_key = (selection * (2 ** CHUNK_SIZE)) + key
+ key_update = keep_selected_lut[packed_selection_and_key]
+
+ packed_selection_and_value = selection * (2 ** CHUNK_SIZE) + value
+ value_update = keep_selected_lut[packed_selection_and_value]
+
+ state_update[:, KEY] = key_update
+ state_update[:, VALUE] = value_update
+
+ new_state = state + state_update
+ return new_state
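+
+# Insert sketch with NUMBER_OF_ENTRIES = 5 and rows 0 and 1 occupied:
+# flags = [1, 1, 0, 0, 0] gives selection = [0, 0, 1, 0, 0], so only row 2
+# receives the new flag, key chunks, and value chunks.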
+
+def _replace_impl(state, key, value):
+ flags = state[:, FLAG]
+ keys = state[:, KEY]
+ values = state[:, VALUE]
+
+ # A row matches when all of its key chunks equal the queried key's chunks...
+ equal_rows = (np.sum((keys - key) == 0, axis=1) == NUMBER_OF_KEY_CHUNKS)
+ # ...and it is occupied: flags * 2 + equal_rows == 3 only when both are 1.
+ selection = (flags * 2 + equal_rows == 3).reshape((-1, 1))
+
+ packed_selection_and_value = selection * (2 ** CHUNK_SIZE) + value
+ set_value = keep_selected_lut[packed_selection_and_value]
+
+ inverse_selection = 1 - selection
+ packed_inverse_selection_and_values = inverse_selection * (2 ** CHUNK_SIZE) + values
+ kept_values = keep_selected_lut[packed_inverse_selection_and_values]
+
+ new_values = kept_values + set_value
+ state[:, VALUE] = new_values
+
+ return state
+
+def _query_impl(state, key):
+ keys = state[:, KEY]
+ values = state[:, VALUE]
+
+ selection = (np.sum((keys - key) == 0, axis=1) == NUMBER_OF_KEY_CHUNKS).reshape((-1, 1))
+ found = np.sum(selection)
+
+ packed_selection_and_values = selection * (2 ** CHUNK_SIZE) + values
+ value_selection = keep_selected_lut[packed_selection_and_values]
+ value = np.sum(value_selection, axis=0)
+
+ return cnp.array([found, *value])
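+
+# Query sketch: when exactly one row holds the key, the result is
+# [1, *value_chunks]; when no row matches, it is all zeros and the caller
+# maps it to None.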
+
+
+class KeyValueDatabase:
+
+ _state: np.ndarray
+
+ _insert_circuit: cnp.Circuit
+ _replace_circuit: cnp.Circuit
+ _query_circuit: cnp.Circuit
+
+ def __init__(self):
+ self._state = np.zeros(STATE_SHAPE, dtype=np.int64)
+
+ inputset_binary = [
+ (
+ # state
+ np.zeros(STATE_SHAPE, dtype=np.int64),
+ # key
+ np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ )
+ ]
+ inputset_ternary = [
+ (
+ # state
+ np.zeros(STATE_SHAPE, dtype=np.int64),
+ # key
+ np.ones(NUMBER_OF_KEY_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ # value
+ np.ones(NUMBER_OF_VALUE_CHUNKS, dtype=np.int64) * (2**CHUNK_SIZE - 1),
+ )
+ ]
+
+ configuration = cnp.Configuration(
+ enable_unsafe_features=True,
+ use_insecure_key_cache=True,
+ insecure_key_cache_location=".keys",
+ )
+
+ insert_compiler = cnp.Compiler(
+ _insert_impl,
+ {"state": "encrypted", "key": "encrypted", "value": "encrypted"}
+ )
+ replace_compiler = cnp.Compiler(
+ _replace_impl,
+ {"state": "encrypted", "key": "encrypted", "value": "encrypted"}
+ )
+ query_compiler = cnp.Compiler(
+ _query_impl,
+ {"state": "encrypted", "key": "encrypted"}
+ )
+
+ print()
+
+ print("Compiling insertion circuit...")
+ start = time.time()
+ self._insert_circuit = insert_compiler.compile(inputset_ternary, configuration)
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Compiling replacement circuit...")
+ start = time.time()
+ self._replace_circuit = replace_compiler.compile(inputset_ternary, configuration)
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Compiling query circuit...")
+ start = time.time()
+ self._query_circuit = query_compiler.compile(inputset_binary, configuration)
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Generating insertion keys...")
+ start = time.time()
+ self._insert_circuit.keygen()
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Generating replacement keys...")
+ start = time.time()
+ self._replace_circuit.keygen()
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ print()
+
+ print("Generating query keys...")
+ start = time.time()
+ self._query_circuit.keygen()
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ def insert(self, key, value):
+ print()
+ print(f"Inserting...")
+ start = time.time()
+ self._state = self._insert_circuit.encrypt_run_decrypt(
+ self._state, encode_key(key), encode_value(value)
+ )
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ def replace(self, key, value):
+ print()
+ print(f"Replacing...")
+ start = time.time()
+ self._state = self._replace_circuit.encrypt_run_decrypt(
+ self._state, encode_key(key), encode_value(value)
+ )
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ def query(self, key):
+ print()
+ print(f"Querying...")
+ start = time.time()
+ result = self._query_circuit.encrypt_run_decrypt(
+ self._state, encode_key(key)
+ )
+ end = time.time()
+ print(f"(took {end - start:.3f} seconds)")
+
+ if result[0] == 0:
+ return None
+
+ return decode(result[1:])
+
+
+db = KeyValueDatabase()
+
+# Test: Insert/Query
+db.insert(3, 4)
+assert db.query(3) == 4
+
+# Test: Replace/Query
+db.replace(3, 1)
+assert db.query(3) == 1
+
+# Test: Insert/Query
+db.insert(25, 40)
+assert db.query(25) == 40
+
+# Test: Query Not Found
+assert db.query(4) is None
+
+# Test: Replace/Query
+db.replace(3, 5)
+assert db.query(3) == 5
+
+
+# Define lower/upper bounds for the key
+minimum_key = 1
+maximum_key = 2 ** KEY_SIZE - 1
+# Define lower/upper bounds for the value
+minimum_value = 1
+maximum_value = 2 ** VALUE_SIZE - 1
+
+
+# Test: Insert/Replace/Query Bounds
+# Insert (key: minimum_key , value: minimum_value) into the database
+db.insert(minimum_key, minimum_value)
+
+# Query the database for the key=minimum_key
+# The value minimum_value should be returned
+assert db.query(minimum_key) == minimum_value
+
+# Insert (key: maximum_key , value: maximum_value) into the database
+db.insert(maximum_key, maximum_value)
+
+# Query the database for the key=maximum_key
+# The value maximum_value should be returned
+assert db.query(maximum_key) == maximum_value
+
+# Replace the value of key=minimum_key with maximum_value
+db.replace(minimum_key, maximum_value)
+
+# Query the database for the key=minimum_key
+# The value maximum_value should be returned
+assert db.query(minimum_key) == maximum_value
+
+# Replace the value of key=maximum_key with minimum_value
+db.replace(maximum_key, minimum_value)
+
+# Query the database for the key=maximum_key
+# The value minimum_value should be returned
+assert db.query(maximum_key) == minimum_value
diff --git a/frontends/concrete-python/mypy.ini b/frontends/concrete-python/mypy.ini
new file mode 100644
index 000000000..35159d2f1
--- /dev/null
+++ b/frontends/concrete-python/mypy.ini
@@ -0,0 +1,3 @@
+[mypy]
+plugins = numpy.typing.mypy_plugin
+disable_error_code = annotation-unchecked
diff --git a/frontends/concrete-python/poetry.lock b/frontends/concrete-python/poetry.lock
new file mode 100644
index 000000000..35c7b3fab
--- /dev/null
+++ b/frontends/concrete-python/poetry.lock
@@ -0,0 +1,3965 @@
+# This file is automatically @generated by Poetry 1.4.0 and should not be changed by hand.
+
+[[package]]
+name = "anyio"
+version = "3.6.2"
+description = "High level compatibility layer for multiple asynchronous event loop implementations"
+category = "dev"
+optional = false
+python-versions = ">=3.6.2"
+files = [
+ {file = "anyio-3.6.2-py3-none-any.whl", hash = "sha256:fbbe32bd270d2a2ef3ed1c5d45041250284e31fc0a4df4a5a6071842051a51e3"},
+ {file = "anyio-3.6.2.tar.gz", hash = "sha256:25ea0d673ae30af41a0c442f81cf3b38c7e79fdc7b60335a4c14e05eb0947421"},
+]
+
+[package.dependencies]
+idna = ">=2.8"
+sniffio = ">=1.1"
+typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
+
+[package.extras]
+doc = ["packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
+test = ["contextlib2", "coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "mock (>=4)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (<0.15)", "uvloop (>=0.15)"]
+trio = ["trio (>=0.16,<0.22)"]
+
+[[package]]
+name = "appnope"
+version = "0.1.3"
+description = "Disable App Nap on macOS >= 10.9"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
+ {file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
+]
+
+[[package]]
+name = "argon2-cffi"
+version = "21.3.0"
+description = "The secure Argon2 password hashing algorithm."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
+ {file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
+]
+
+[package.dependencies]
+argon2-cffi-bindings = "*"
+typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
+
+[package.extras]
+dev = ["cogapp", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "pre-commit", "pytest", "sphinx", "sphinx-notfound-page", "tomli"]
+docs = ["furo", "sphinx", "sphinx-notfound-page"]
+tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
+
+[[package]]
+name = "argon2-cffi-bindings"
+version = "21.2.0"
+description = "Low-level CFFI bindings for Argon2"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
+ {file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
+ {file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
+ {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
+ {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
+ {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
+ {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
+ {file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
+ {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
+ {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
+ {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
+ {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
+ {file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
+]
+
+[package.dependencies]
+cffi = ">=1.0.1"
+
+[package.extras]
+dev = ["cogapp", "pre-commit", "pytest", "wheel"]
+tests = ["pytest"]
+
+[[package]]
+name = "astroid"
+version = "2.8.6"
+description = "An abstract syntax tree for Python with inference support."
+category = "dev"
+optional = false
+python-versions = "~=3.6"
+files = [
+ {file = "astroid-2.8.6-py3-none-any.whl", hash = "sha256:cd8326b424c971e7d87678609cf6275d22028afd37d6ac59c16d47f1245882f6"},
+ {file = "astroid-2.8.6.tar.gz", hash = "sha256:5f6f75e45f15290e73b56f9dfde95b4bf96382284cde406ef4203e928335a495"},
+]
+
+[package.dependencies]
+lazy-object-proxy = ">=1.4.0"
+setuptools = ">=20.0"
+typed-ast = {version = ">=1.4.0,<2.0", markers = "implementation_name == \"cpython\" and python_version < \"3.8\""}
+typing-extensions = {version = ">=3.10", markers = "python_version < \"3.10\""}
+wrapt = ">=1.11,<1.14"
+
+[[package]]
+name = "atomicwrites"
+version = "1.4.1"
+description = "Atomic file writes."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11"},
+]
+
+[[package]]
+name = "attrs"
+version = "22.2.0"
+description = "Classes Without Boilerplate"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"},
+ {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"},
+]
+
+[package.extras]
+cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"]
+dev = ["attrs[docs,tests]"]
+docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"]
+tests = ["attrs[tests-no-zope]", "zope.interface"]
+tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy (>=0.971,<0.990)", "mypy (>=0.971,<0.990)", "pympler", "pympler", "pytest (>=4.3.0)", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-mypy-plugins", "pytest-xdist[psutil]", "pytest-xdist[psutil]"]
+
+[[package]]
+name = "backcall"
+version = "0.2.0"
+description = "Specifications for callback functions passed in to an API"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
+ {file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
+]
+
+[[package]]
+name = "beautifulsoup4"
+version = "4.11.2"
+description = "Screen-scraping library"
+category = "dev"
+optional = false
+python-versions = ">=3.6.0"
+files = [
+ {file = "beautifulsoup4-4.11.2-py3-none-any.whl", hash = "sha256:0e79446b10b3ecb499c1556f7e228a53e64a2bfcebd455f370d8927cb5b59e39"},
+ {file = "beautifulsoup4-4.11.2.tar.gz", hash = "sha256:bc4bdda6717de5a2987436fb8d72f45dc90dd856bdfd512a1314ce90349a0106"},
+]
+
+[package.dependencies]
+soupsieve = ">1.2"
+
+[package.extras]
+html5lib = ["html5lib"]
+lxml = ["lxml"]
+
+[[package]]
+name = "black"
+version = "22.12.0"
+description = "The uncompromising code formatter."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "black-22.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eedd20838bd5d75b80c9f5487dbcb06836a43833a37846cf1d8c1cc01cef59d"},
+ {file = "black-22.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:159a46a4947f73387b4d83e87ea006dbb2337eab6c879620a3ba52699b1f4351"},
+ {file = "black-22.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d30b212bffeb1e252b31dd269dfae69dd17e06d92b87ad26e23890f3efea366f"},
+ {file = "black-22.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:7412e75863aa5c5411886804678b7d083c7c28421210180d67dfd8cf1221e1f4"},
+ {file = "black-22.12.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c116eed0efb9ff870ded8b62fe9f28dd61ef6e9ddd28d83d7d264a38417dcee2"},
+ {file = "black-22.12.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1f58cbe16dfe8c12b7434e50ff889fa479072096d79f0a7f25e4ab8e94cd8350"},
+ {file = "black-22.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77d86c9f3db9b1bf6761244bc0b3572a546f5fe37917a044e02f3166d5aafa7d"},
+ {file = "black-22.12.0-cp38-cp38-win_amd64.whl", hash = "sha256:82d9fe8fee3401e02e79767016b4907820a7dc28d70d137eb397b92ef3cc5bfc"},
+ {file = "black-22.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:101c69b23df9b44247bd88e1d7e90154336ac4992502d4197bdac35dd7ee3320"},
+ {file = "black-22.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:559c7a1ba9a006226f09e4916060982fd27334ae1998e7a38b3f33a37f7a2148"},
+ {file = "black-22.12.0-py3-none-any.whl", hash = "sha256:436cc9167dd28040ad90d3b404aec22cedf24a6e4d7de221bec2730ec0c97bcf"},
+ {file = "black-22.12.0.tar.gz", hash = "sha256:229351e5a18ca30f447bf724d007f890f97e13af070bb6ad4c0a441cd7596a2f"},
+]
+
+[package.dependencies]
+click = ">=8.0.0"
+mypy-extensions = ">=0.4.3"
+pathspec = ">=0.9.0"
+platformdirs = ">=2"
+tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
+typed-ast = {version = ">=1.4.2", markers = "python_version < \"3.8\" and implementation_name == \"cpython\""}
+typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
+
+[package.extras]
+colorama = ["colorama (>=0.4.3)"]
+d = ["aiohttp (>=3.7.4)"]
+jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
+uvloop = ["uvloop (>=0.15.2)"]
+
+[[package]]
+name = "bleach"
+version = "6.0.0"
+description = "An easy safelist-based HTML-sanitizing tool."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "bleach-6.0.0-py3-none-any.whl", hash = "sha256:33c16e3353dbd13028ab4799a0f89a83f113405c766e9c122df8a06f5b85b3f4"},
+ {file = "bleach-6.0.0.tar.gz", hash = "sha256:1a1a85c1595e07d8db14c5f09f09e6433502c51c595970edc090551f0db99414"},
+]
+
+[package.dependencies]
+six = ">=1.9.0"
+webencodings = "*"
+
+[package.extras]
+css = ["tinycss2 (>=1.1.0,<1.2)"]
+
+[[package]]
+name = "cachecontrol"
+version = "0.12.11"
+description = "httplib2 caching for requests"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "CacheControl-0.12.11-py2.py3-none-any.whl", hash = "sha256:2c75d6a8938cb1933c75c50184549ad42728a27e9f6b92fd677c3151aa72555b"},
+ {file = "CacheControl-0.12.11.tar.gz", hash = "sha256:a5b9fcc986b184db101aa280b42ecdcdfc524892596f606858e0b7a8b4d9e144"},
+]
+
+[package.dependencies]
+lockfile = {version = ">=0.9", optional = true, markers = "extra == \"filecache\""}
+msgpack = ">=0.5.2"
+requests = "*"
+
+[package.extras]
+filecache = ["lockfile (>=0.9)"]
+redis = ["redis (>=2.10.5)"]
+
+[[package]]
+name = "certifi"
+version = "2022.12.7"
+description = "Python package for providing Mozilla's CA Bundle."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"},
+ {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"},
+]
+
+[[package]]
+name = "cffi"
+version = "1.15.1"
+description = "Foreign Function Interface for Python calling C code."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
+ {file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
+ {file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
+ {file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
+ {file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
+ {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
+ {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
+ {file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
+ {file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
+ {file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
+ {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
+ {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
+ {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
+ {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
+ {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
+ {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
+ {file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
+ {file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
+ {file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
+ {file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
+ {file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
+ {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
+ {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
+ {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
+ {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
+ {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
+ {file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
+ {file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
+ {file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
+ {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
+ {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
+ {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
+ {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
+ {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
+ {file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
+ {file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
+ {file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
+ {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
+ {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
+ {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
+ {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
+ {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
+ {file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
+ {file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
+ {file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
+ {file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
+ {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
+ {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
+ {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
+ {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
+ {file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
+ {file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
+ {file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
+ {file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
+ {file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
+ {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
+ {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
+ {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
+ {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
+ {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
+ {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
+ {file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
+ {file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
+ {file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
+]
+
+[package.dependencies]
+pycparser = "*"
+
+[[package]]
+name = "charset-normalizer"
+version = "3.0.1"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "charset-normalizer-3.0.1.tar.gz", hash = "sha256:ebea339af930f8ca5d7a699b921106c6e29c617fe9606fa7baa043c1cdae326f"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:88600c72ef7587fe1708fd242b385b6ed4b8904976d5da0893e31df8b3480cb6"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c75ffc45f25324e68ab238cb4b5c0a38cd1c3d7f1fb1f72b5541de469e2247db"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:db72b07027db150f468fbada4d85b3b2729a3db39178abf5c543b784c1254539"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62595ab75873d50d57323a91dd03e6966eb79c41fa834b7a1661ed043b2d404d"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ff6f3db31555657f3163b15a6b7c6938d08df7adbfc9dd13d9d19edad678f1e8"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:772b87914ff1152b92a197ef4ea40efe27a378606c39446ded52c8f80f79702e"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70990b9c51340e4044cfc394a81f614f3f90d41397104d226f21e66de668730d"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:292d5e8ba896bbfd6334b096e34bffb56161c81408d6d036a7dfa6929cff8783"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:2edb64ee7bf1ed524a1da60cdcd2e1f6e2b4f66ef7c077680739f1641f62f555"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:31a9ddf4718d10ae04d9b18801bd776693487cbb57d74cc3458a7673f6f34639"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:44ba614de5361b3e5278e1241fda3dc1838deed864b50a10d7ce92983797fa76"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:12db3b2c533c23ab812c2b25934f60383361f8a376ae272665f8e48b88e8e1c6"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c512accbd6ff0270939b9ac214b84fb5ada5f0409c44298361b2f5e13f9aed9e"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-win32.whl", hash = "sha256:502218f52498a36d6bf5ea77081844017bf7982cdbe521ad85e64cabee1b608b"},
+ {file = "charset_normalizer-3.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:601f36512f9e28f029d9481bdaf8e89e5148ac5d89cffd3b05cd533eeb423b59"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0298eafff88c99982a4cf66ba2efa1128e4ddaca0b05eec4c456bbc7db691d8d"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a8d0fc946c784ff7f7c3742310cc8a57c5c6dc31631269876a88b809dbeff3d3"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:87701167f2a5c930b403e9756fab1d31d4d4da52856143b609e30a1ce7160f3c"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14e76c0f23218b8f46c4d87018ca2e441535aed3632ca134b10239dfb6dadd6b"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0c0a590235ccd933d9892c627dec5bc7511ce6ad6c1011fdf5b11363022746c1"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8c7fe7afa480e3e82eed58e0ca89f751cd14d767638e2550c77a92a9e749c317"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79909e27e8e4fcc9db4addea88aa63f6423ebb171db091fb4373e3312cb6d603"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ac7b6a045b814cf0c47f3623d21ebd88b3e8cf216a14790b455ea7ff0135d18"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:72966d1b297c741541ca8cf1223ff262a6febe52481af742036a0b296e35fa5a"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:f9d0c5c045a3ca9bedfc35dca8526798eb91a07aa7a2c0fee134c6c6f321cbd7"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5995f0164fa7df59db4746112fec3f49c461dd6b31b841873443bdb077c13cfc"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:4a8fcf28c05c1f6d7e177a9a46a1c52798bfe2ad80681d275b10dcf317deaf0b"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:761e8904c07ad053d285670f36dd94e1b6ab7f16ce62b9805c475b7aa1cffde6"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-win32.whl", hash = "sha256:71140351489970dfe5e60fc621ada3e0f41104a5eddaca47a7acb3c1b851d6d3"},
+ {file = "charset_normalizer-3.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:9ab77acb98eba3fd2a85cd160851816bfce6871d944d885febf012713f06659c"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:84c3990934bae40ea69a82034912ffe5a62c60bbf6ec5bc9691419641d7d5c9a"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:74292fc76c905c0ef095fe11e188a32ebd03bc38f3f3e9bcb85e4e6db177b7ea"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c95a03c79bbe30eec3ec2b7f076074f4281526724c8685a42872974ef4d36b72"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f4c39b0e3eac288fedc2b43055cfc2ca7a60362d0e5e87a637beac5d801ef478"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:df2c707231459e8a4028eabcd3cfc827befd635b3ef72eada84ab13b52e1574d"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:93ad6d87ac18e2a90b0fe89df7c65263b9a99a0eb98f0a3d2e079f12a0735837"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:59e5686dd847347e55dffcc191a96622f016bc0ad89105e24c14e0d6305acbc6"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:cd6056167405314a4dc3c173943f11249fa0f1b204f8b51ed4bde1a9cd1834dc"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:083c8d17153ecb403e5e1eb76a7ef4babfc2c48d58899c98fcaa04833e7a2f9a"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:f5057856d21e7586765171eac8b9fc3f7d44ef39425f85dbcccb13b3ebea806c"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:7eb33a30d75562222b64f569c642ff3dc6689e09adda43a082208397f016c39a"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-win32.whl", hash = "sha256:95dea361dd73757c6f1c0a1480ac499952c16ac83f7f5f4f84f0658a01b8ef41"},
+ {file = "charset_normalizer-3.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:eaa379fcd227ca235d04152ca6704c7cb55564116f8bc52545ff357628e10602"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3e45867f1f2ab0711d60c6c71746ac53537f1684baa699f4f668d4c6f6ce8e14"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cadaeaba78750d58d3cc6ac4d1fd867da6fc73c88156b7a3212a3cd4819d679d"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:911d8a40b2bef5b8bbae2e36a0b103f142ac53557ab421dc16ac4aafee6f53dc"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:503e65837c71b875ecdd733877d852adbc465bd82c768a067badd953bf1bc5a3"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a60332922359f920193b1d4826953c507a877b523b2395ad7bc716ddd386d866"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16a8663d6e281208d78806dbe14ee9903715361cf81f6d4309944e4d1e59ac5b"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a16418ecf1329f71df119e8a65f3aa68004a3f9383821edcb20f0702934d8087"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9d9153257a3f70d5f69edf2325357251ed20f772b12e593f3b3377b5f78e7ef8"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:02a51034802cbf38db3f89c66fb5d2ec57e6fe7ef2f4a44d070a593c3688667b"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:2e396d70bc4ef5325b72b593a72c8979999aa52fb8bcf03f701c1b03e1166918"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:11b53acf2411c3b09e6af37e4b9005cba376c872503c8f28218c7243582df45d"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-win32.whl", hash = "sha256:0bf2dae5291758b6f84cf923bfaa285632816007db0330002fa1de38bfcb7154"},
+ {file = "charset_normalizer-3.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:2c03cc56021a4bd59be889c2b9257dae13bf55041a3372d3295416f86b295fb5"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:024e606be3ed92216e2b6952ed859d86b4cfa52cd5bc5f050e7dc28f9b43ec42"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4b0d02d7102dd0f997580b51edc4cebcf2ab6397a7edf89f1c73b586c614272c"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:358a7c4cb8ba9b46c453b1dd8d9e431452d5249072e4f56cfda3149f6ab1405e"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:81d6741ab457d14fdedc215516665050f3822d3e56508921cc7239f8c8e66a58"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8b8af03d2e37866d023ad0ddea594edefc31e827fee64f8de5611a1dbc373174"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9cf4e8ad252f7c38dd1f676b46514f92dc0ebeb0db5552f5f403509705e24753"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e696f0dd336161fca9adbb846875d40752e6eba585843c768935ba5c9960722b"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c22d3fe05ce11d3671297dc8973267daa0f938b93ec716e12e0f6dee81591dc1"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:109487860ef6a328f3eec66f2bf78b0b72400280d8f8ea05f69c51644ba6521a"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:37f8febc8ec50c14f3ec9637505f28e58d4f66752207ea177c1d67df25da5aed"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:f97e83fa6c25693c7a35de154681fcc257c1c41b38beb0304b9c4d2d9e164479"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:a152f5f33d64a6be73f1d30c9cc82dfc73cec6477ec268e7c6e4c7d23c2d2291"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:39049da0ffb96c8cbb65cbf5c5f3ca3168990adf3551bd1dee10c48fce8ae820"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-win32.whl", hash = "sha256:4457ea6774b5611f4bed5eaa5df55f70abde42364d498c5134b7ef4c6958e20e"},
+ {file = "charset_normalizer-3.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:e62164b50f84e20601c1ff8eb55620d2ad25fb81b59e3cd776a1902527a788af"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8eade758719add78ec36dc13201483f8e9b5d940329285edcd5f70c0a9edbd7f"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8499ca8f4502af841f68135133d8258f7b32a53a1d594aa98cc52013fff55678"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3fc1c4a2ffd64890aebdb3f97e1278b0cc72579a08ca4de8cd2c04799a3a22be"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:00d3ffdaafe92a5dc603cb9bd5111aaa36dfa187c8285c543be562e61b755f6b"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c2ac1b08635a8cd4e0cbeaf6f5e922085908d48eb05d44c5ae9eabab148512ca"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f6f45710b4459401609ebebdbcfb34515da4fc2aa886f95107f556ac69a9147e"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ae1de54a77dc0d6d5fcf623290af4266412a7c4be0b1ff7444394f03f5c54e3"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b590df687e3c5ee0deef9fc8c547d81986d9a1b56073d82de008744452d6541"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab5de034a886f616a5668aa5d098af2b5385ed70142090e2a31bcbd0af0fdb3d"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9cb3032517f1627cc012dbc80a8ec976ae76d93ea2b5feaa9d2a5b8882597579"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:608862a7bf6957f2333fc54ab4399e405baad0163dc9f8d99cb236816db169d4"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0f438ae3532723fb6ead77e7c604be7c8374094ef4ee2c5e03a3a17f1fca256c"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:356541bf4381fa35856dafa6a965916e54bed415ad8a24ee6de6e37deccf2786"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-win32.whl", hash = "sha256:39cf9ed17fe3b1bc81f33c9ceb6ce67683ee7526e65fde1447c772afc54a1bb8"},
+ {file = "charset_normalizer-3.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:0a11e971ed097d24c534c037d298ad32c6ce81a45736d31e0ff0ad37ab437d59"},
+ {file = "charset_normalizer-3.0.1-py3-none-any.whl", hash = "sha256:7e189e2e1d3ed2f4aebabd2d5b0f931e883676e51c7624826e0a4e5fe8a0bf24"},
+]
+
+[[package]]
+name = "click"
+version = "8.1.3"
+description = "Composable command line interface toolkit"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
+ {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
+]
+
+[package.dependencies]
+colorama = {version = "*", markers = "platform_system == \"Windows\""}
+importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
+
+[[package]]
+name = "click-log"
+version = "0.4.0"
+description = "Logging integration for Click"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "click-log-0.4.0.tar.gz", hash = "sha256:3970f8570ac54491237bcdb3d8ab5e3eef6c057df29f8c3d1151a51a9c23b975"},
+ {file = "click_log-0.4.0-py2.py3-none-any.whl", hash = "sha256:a43e394b528d52112af599f2fc9e4b7cf3c15f94e53581f74fa6867e68c91756"},
+]
+
+[package.dependencies]
+click = "*"
+
+[[package]]
+name = "colorama"
+version = "0.4.6"
+description = "Cross-platform colored terminal text."
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
+files = [
+ {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
+ {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
+]
+
+[[package]]
+name = "concrete-compiler"
+version = "0.24.0rc5"
+description = "Concrete Compiler"
+category = "main"
+optional = false
+python-versions = "*"
+files = [
+ {file = "concrete_compiler-0.24.0rc5-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:c99e79d72f55af33695361bdc366287c725c0c0e9bde148c97e5c63139554870"},
+ {file = "concrete_compiler-0.24.0rc5-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:0cc24dbc02f6b974720dbb31cf9b737866d8a4e425a86d0342e6116b93381665"},
+ {file = "concrete_compiler-0.24.0rc5-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:b94f39d114be84ca483bf093304ab60ad73b8d9984a58697fe0f9d6e990c9b96"},
+ {file = "concrete_compiler-0.24.0rc5-cp38-cp38-macosx_11_0_x86_64.whl", hash = "sha256:72ef65c9827e8bcd38fd87a0ac356f89f850d5d9c178fe1224d3a8dcccc181c9"},
+ {file = "concrete_compiler-0.24.0rc5-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:893789cb25227049e8e8c6a9dfb80c2a5132c4beef38105d6d464e10cd0221b3"},
+ {file = "concrete_compiler-0.24.0rc5-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:4e1aef7eae87daffcc500b84fbc36c1c460d5d202cdc31ee753a3eafe4866269"},
+ {file = "concrete_compiler-0.24.0rc5-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:8de4396fbb48c0a20b65e448cf3ee2ef15f801d75bf358c343a5f350f49b5a56"},
+]
+
+[package.dependencies]
+numpy = "*"
+PyYAML = "*"
+setuptools = "*"
+
+[[package]]
+name = "coverage"
+version = "7.2.1"
+description = "Code coverage measurement for Python"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "coverage-7.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:49567ec91fc5e0b15356da07a2feabb421d62f52a9fff4b1ec40e9e19772f5f8"},
+ {file = "coverage-7.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d2ef6cae70168815ed91388948b5f4fcc69681480a0061114db737f957719f03"},
+ {file = "coverage-7.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3004765bca3acd9e015794e5c2f0c9a05587f5e698127ff95e9cfba0d3f29339"},
+ {file = "coverage-7.2.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cca7c0b7f5881dfe0291ef09ba7bb1582cb92ab0aeffd8afb00c700bf692415a"},
+ {file = "coverage-7.2.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b2167d116309f564af56f9aa5e75ef710ef871c5f9b313a83050035097b56820"},
+ {file = "coverage-7.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:cb5f152fb14857cbe7f3e8c9a5d98979c4c66319a33cad6e617f0067c9accdc4"},
+ {file = "coverage-7.2.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:87dc37f16fb5e3a28429e094145bf7c1753e32bb50f662722e378c5851f7fdc6"},
+ {file = "coverage-7.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e191a63a05851f8bce77bc875e75457f9b01d42843f8bd7feed2fc26bbe60833"},
+ {file = "coverage-7.2.1-cp310-cp310-win32.whl", hash = "sha256:e3ea04b23b114572b98a88c85379e9e9ae031272ba1fb9b532aa934c621626d4"},
+ {file = "coverage-7.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:0cf557827be7eca1c38a2480484d706693e7bb1929e129785fe59ec155a59de6"},
+ {file = "coverage-7.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:570c21a29493b350f591a4b04c158ce1601e8d18bdcd21db136fbb135d75efa6"},
+ {file = "coverage-7.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9e872b082b32065ac2834149dc0adc2a2e6d8203080501e1e3c3c77851b466f9"},
+ {file = "coverage-7.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fac6343bae03b176e9b58104a9810df3cdccd5cfed19f99adfa807ffbf43cf9b"},
+ {file = "coverage-7.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abacd0a738e71b20e224861bc87e819ef46fedba2fb01bc1af83dfd122e9c319"},
+ {file = "coverage-7.2.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9256d4c60c4bbfec92721b51579c50f9e5062c21c12bec56b55292464873508"},
+ {file = "coverage-7.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:80559eaf6c15ce3da10edb7977a1548b393db36cbc6cf417633eca05d84dd1ed"},
+ {file = "coverage-7.2.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:0bd7e628f6c3ec4e7d2d24ec0e50aae4e5ae95ea644e849d92ae4805650b4c4e"},
+ {file = "coverage-7.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:09643fb0df8e29f7417adc3f40aaf379d071ee8f0350ab290517c7004f05360b"},
+ {file = "coverage-7.2.1-cp311-cp311-win32.whl", hash = "sha256:1b7fb13850ecb29b62a447ac3516c777b0e7a09ecb0f4bb6718a8654c87dfc80"},
+ {file = "coverage-7.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:617a94ada56bbfe547aa8d1b1a2b8299e2ec1ba14aac1d4b26a9f7d6158e1273"},
+ {file = "coverage-7.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8649371570551d2fd7dee22cfbf0b61f1747cdfb2b7587bb551e4beaaa44cb97"},
+ {file = "coverage-7.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d2b9b5e70a21474c105a133ba227c61bc95f2ac3b66861143ce39a5ea4b3f84"},
+ {file = "coverage-7.2.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ae82c988954722fa07ec5045c57b6d55bc1a0890defb57cf4a712ced65b26ddd"},
+ {file = "coverage-7.2.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:861cc85dfbf55a7a768443d90a07e0ac5207704a9f97a8eb753292a7fcbdfcfc"},
+ {file = "coverage-7.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0339dc3237c0d31c3b574f19c57985fcbe494280153bbcad33f2cdf469f4ac3e"},
+ {file = "coverage-7.2.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:5928b85416a388dd557ddc006425b0c37e8468bd1c3dc118c1a3de42f59e2a54"},
+ {file = "coverage-7.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8d3843ca645f62c426c3d272902b9de90558e9886f15ddf5efe757b12dd376f5"},
+ {file = "coverage-7.2.1-cp37-cp37m-win32.whl", hash = "sha256:6a034480e9ebd4e83d1aa0453fd78986414b5d237aea89a8fdc35d330aa13bae"},
+ {file = "coverage-7.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:6fce673f79a0e017a4dc35e18dc7bb90bf6d307c67a11ad5e61ca8d42b87cbff"},
+ {file = "coverage-7.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f099da6958ddfa2ed84bddea7515cb248583292e16bb9231d151cd528eab657"},
+ {file = "coverage-7.2.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:97a3189e019d27e914ecf5c5247ea9f13261d22c3bb0cfcfd2a9b179bb36f8b1"},
+ {file = "coverage-7.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a81dbcf6c6c877986083d00b834ac1e84b375220207a059ad45d12f6e518a4e3"},
+ {file = "coverage-7.2.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78d2c3dde4c0b9be4b02067185136b7ee4681978228ad5ec1278fa74f5ca3e99"},
+ {file = "coverage-7.2.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a209d512d157379cc9ab697cbdbb4cfd18daa3e7eebaa84c3d20b6af0037384"},
+ {file = "coverage-7.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f3d07edb912a978915576a776756069dede66d012baa503022d3a0adba1b6afa"},
+ {file = "coverage-7.2.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8dca3c1706670297851bca1acff9618455122246bdae623be31eca744ade05ec"},
+ {file = "coverage-7.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b1991a6d64231a3e5bbe3099fb0dd7c9aeaa4275ad0e0aeff4cb9ef885c62ba2"},
+ {file = "coverage-7.2.1-cp38-cp38-win32.whl", hash = "sha256:22c308bc508372576ffa3d2dbc4824bb70d28eeb4fcd79d4d1aed663a06630d0"},
+ {file = "coverage-7.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:b0c0d46de5dd97f6c2d1b560bf0fcf0215658097b604f1840365296302a9d1fb"},
+ {file = "coverage-7.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4dd34a935de268a133e4741827ae951283a28c0125ddcdbcbba41c4b98f2dfef"},
+ {file = "coverage-7.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0f8318ed0f3c376cfad8d3520f496946977abde080439d6689d7799791457454"},
+ {file = "coverage-7.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:834c2172edff5a08d78e2f53cf5e7164aacabeb66b369f76e7bb367ca4e2d993"},
+ {file = "coverage-7.2.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e4d70c853f0546855f027890b77854508bdb4d6a81242a9d804482e667fff6e6"},
+ {file = "coverage-7.2.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a6450da4c7afc4534305b2b7d8650131e130610cea448ff240b6ab73d7eab63"},
+ {file = "coverage-7.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:99f4dd81b2bb8fc67c3da68b1f5ee1650aca06faa585cbc6818dbf67893c6d58"},
+ {file = "coverage-7.2.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:bdd3f2f285ddcf2e75174248b2406189261a79e7fedee2ceeadc76219b6faa0e"},
+ {file = "coverage-7.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:f29351393eb05e6326f044a7b45ed8e38cb4dcc38570d12791f271399dc41431"},
+ {file = "coverage-7.2.1-cp39-cp39-win32.whl", hash = "sha256:e2b50ebc2b6121edf352336d503357321b9d8738bb7a72d06fc56153fd3f4cd8"},
+ {file = "coverage-7.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:bd5a12239c0006252244f94863f1c518ac256160cd316ea5c47fb1a11b25889a"},
+ {file = "coverage-7.2.1-pp37.pp38.pp39-none-any.whl", hash = "sha256:436313d129db7cf5b4ac355dd2bd3f7c7e5294af077b090b85de75f8458b8616"},
+ {file = "coverage-7.2.1.tar.gz", hash = "sha256:c77f2a9093ccf329dd523a9b2b3c854c20d2a3d968b6def3b820272ca6732242"},
+]
+
+[package.dependencies]
+tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
+
+[package.extras]
+toml = ["tomli"]
+
+[[package]]
+name = "cryptography"
+version = "39.0.1"
+description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "cryptography-39.0.1-cp36-abi3-macosx_10_12_universal2.whl", hash = "sha256:6687ef6d0a6497e2b58e7c5b852b53f62142cfa7cd1555795758934da363a965"},
+ {file = "cryptography-39.0.1-cp36-abi3-macosx_10_12_x86_64.whl", hash = "sha256:706843b48f9a3f9b9911979761c91541e3d90db1ca905fd63fee540a217698bc"},
+ {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:5d2d8b87a490bfcd407ed9d49093793d0f75198a35e6eb1a923ce1ee86c62b41"},
+ {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83e17b26de248c33f3acffb922748151d71827d6021d98c70e6c1a25ddd78505"},
+ {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e124352fd3db36a9d4a21c1aa27fd5d051e621845cb87fb851c08f4f75ce8be6"},
+ {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:5aa67414fcdfa22cf052e640cb5ddc461924a045cacf325cd164e65312d99502"},
+ {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:35f7c7d015d474f4011e859e93e789c87d21f6f4880ebdc29896a60403328f1f"},
+ {file = "cryptography-39.0.1-cp36-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f24077a3b5298a5a06a8e0536e3ea9ec60e4c7ac486755e5fb6e6ea9b3500106"},
+ {file = "cryptography-39.0.1-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:f0c64d1bd842ca2633e74a1a28033d139368ad959872533b1bab8c80e8240a0c"},
+ {file = "cryptography-39.0.1-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:0f8da300b5c8af9f98111ffd512910bc792b4c77392a9523624680f7956a99d4"},
+ {file = "cryptography-39.0.1-cp36-abi3-win32.whl", hash = "sha256:fe913f20024eb2cb2f323e42a64bdf2911bb9738a15dba7d3cce48151034e3a8"},
+ {file = "cryptography-39.0.1-cp36-abi3-win_amd64.whl", hash = "sha256:ced4e447ae29ca194449a3f1ce132ded8fcab06971ef5f618605aacaa612beac"},
+ {file = "cryptography-39.0.1-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:807ce09d4434881ca3a7594733669bd834f5b2c6d5c7e36f8c00f691887042ad"},
+ {file = "cryptography-39.0.1-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c5caeb8188c24888c90b5108a441c106f7faa4c4c075a2bcae438c6e8ca73cef"},
+ {file = "cryptography-39.0.1-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:4789d1e3e257965e960232345002262ede4d094d1a19f4d3b52e48d4d8f3b885"},
+ {file = "cryptography-39.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:96f1157a7c08b5b189b16b47bc9db2332269d6680a196341bf30046330d15388"},
+ {file = "cryptography-39.0.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:e422abdec8b5fa8462aa016786680720d78bdce7a30c652b7fadf83a4ba35336"},
+ {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:b0afd054cd42f3d213bf82c629efb1ee5f22eba35bf0eec88ea9ea7304f511a2"},
+ {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:6f8ba7f0328b79f08bdacc3e4e66fb4d7aab0c3584e0bd41328dce5262e26b2e"},
+ {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:ef8b72fa70b348724ff1218267e7f7375b8de4e8194d1636ee60510aae104cd0"},
+ {file = "cryptography-39.0.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:aec5a6c9864be7df2240c382740fcf3b96928c46604eaa7f3091f58b878c0bb6"},
+ {file = "cryptography-39.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:fdd188c8a6ef8769f148f88f859884507b954cc64db6b52f66ef199bb9ad660a"},
+ {file = "cryptography-39.0.1.tar.gz", hash = "sha256:d1f6198ee6d9148405e49887803907fe8962a23e6c6f83ea7d98f1c0de375695"},
+]
+
+[package.dependencies]
+cffi = ">=1.12"
+
+[package.extras]
+docs = ["sphinx (>=5.3.0)", "sphinx-rtd-theme (>=1.1.1)"]
+docstest = ["pyenchant (>=1.6.11)", "sphinxcontrib-spelling (>=4.0.1)", "twine (>=1.12.0)"]
+pep8test = ["black", "check-manifest", "mypy", "ruff", "types-pytz", "types-requests"]
+sdist = ["setuptools-rust (>=0.11.4)"]
+ssh = ["bcrypt (>=3.1.5)"]
+test = ["hypothesis (>=1.11.4,!=3.79.2)", "iso8601", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-shard (>=0.1.2)", "pytest-subtests", "pytest-xdist", "pytz"]
+test-randomorder = ["pytest-randomly"]
+tox = ["tox"]
+
+[[package]]
+name = "cycler"
+version = "0.11.0"
+description = "Composable style cycles"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
+ {file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
+]
+
+[[package]]
+name = "cyclonedx-python-lib"
+version = "0.12.3"
+description = "A library for producing CycloneDX SBOM (Software Bill of Materials) files."
+category = "dev"
+optional = false
+python-versions = ">=3.6,<4.0"
+files = [
+ {file = "cyclonedx-python-lib-0.12.3.tar.gz", hash = "sha256:ea2d5393a3de4a347e9c99c6c59efe4e3f64da2fb48e80f3e350fee289fa8a73"},
+ {file = "cyclonedx_python_lib-0.12.3-py3-none-any.whl", hash = "sha256:ba7297148c0f3a72ed3395495080be86a344539084f9d4df9c0f3b466e4f6e54"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = ">=3.4", markers = "python_version < \"3.8\""}
+packageurl-python = ">=0.9"
+setuptools = ">=47.0.0"
+toml = ">=0.10.0,<0.11.0"
+types-setuptools = ">=57.0.0"
+types-toml = ">=0.10.0,<0.11.0"
+typing-extensions = {version = ">=3.10,<4.0", markers = "python_version < \"3.8\""}
+
+[[package]]
+name = "debugpy"
+version = "1.6.6"
+description = "An implementation of the Debug Adapter Protocol for Python"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "debugpy-1.6.6-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:0ea1011e94416e90fb3598cc3ef5e08b0a4dd6ce6b9b33ccd436c1dffc8cd664"},
+ {file = "debugpy-1.6.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dff595686178b0e75580c24d316aa45a8f4d56e2418063865c114eef651a982e"},
+ {file = "debugpy-1.6.6-cp310-cp310-win32.whl", hash = "sha256:87755e173fcf2ec45f584bb9d61aa7686bb665d861b81faa366d59808bbd3494"},
+ {file = "debugpy-1.6.6-cp310-cp310-win_amd64.whl", hash = "sha256:72687b62a54d9d9e3fb85e7a37ea67f0e803aaa31be700e61d2f3742a5683917"},
+ {file = "debugpy-1.6.6-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:78739f77c58048ec006e2b3eb2e0cd5a06d5f48c915e2fc7911a337354508110"},
+ {file = "debugpy-1.6.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23c29e40e39ad7d869d408ded414f6d46d82f8a93b5857ac3ac1e915893139ca"},
+ {file = "debugpy-1.6.6-cp37-cp37m-win32.whl", hash = "sha256:7aa7e103610e5867d19a7d069e02e72eb2b3045b124d051cfd1538f1d8832d1b"},
+ {file = "debugpy-1.6.6-cp37-cp37m-win_amd64.whl", hash = "sha256:f6383c29e796203a0bba74a250615ad262c4279d398e89d895a69d3069498305"},
+ {file = "debugpy-1.6.6-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:23363e6d2a04d726bbc1400bd4e9898d54419b36b2cdf7020e3e215e1dcd0f8e"},
+ {file = "debugpy-1.6.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b5d1b13d7c7bf5d7cf700e33c0b8ddb7baf030fcf502f76fc061ddd9405d16c"},
+ {file = "debugpy-1.6.6-cp38-cp38-win32.whl", hash = "sha256:70ab53918fd907a3ade01909b3ed783287ede362c80c75f41e79596d5ccacd32"},
+ {file = "debugpy-1.6.6-cp38-cp38-win_amd64.whl", hash = "sha256:c05349890804d846eca32ce0623ab66c06f8800db881af7a876dc073ac1c2225"},
+ {file = "debugpy-1.6.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a771739902b1ae22a120dbbb6bd91b2cae6696c0e318b5007c5348519a4211c6"},
+ {file = "debugpy-1.6.6-cp39-cp39-win32.whl", hash = "sha256:549ae0cb2d34fc09d1675f9b01942499751d174381b6082279cf19cdb3c47cbe"},
+ {file = "debugpy-1.6.6-cp39-cp39-win_amd64.whl", hash = "sha256:de4a045fbf388e120bb6ec66501458d3134f4729faed26ff95de52a754abddb1"},
+ {file = "debugpy-1.6.6-py2.py3-none-any.whl", hash = "sha256:be596b44448aac14eb3614248c91586e2bc1728e020e82ef3197189aae556115"},
+ {file = "debugpy-1.6.6.zip", hash = "sha256:b9c2130e1c632540fbf9c2c88341493797ddf58016e7cba02e311de9b0a96b67"},
+]
+
+[[package]]
+name = "decorator"
+version = "5.1.1"
+description = "Decorators for Humans"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
+ {file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
+]
+
+[[package]]
+name = "defusedxml"
+version = "0.7.1"
+description = "XML bomb protection for Python stdlib modules"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+ {file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
+ {file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
+]
+
+[[package]]
+name = "docutils"
+version = "0.19"
+description = "Docutils -- Python Documentation Utilities"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "docutils-0.19-py3-none-any.whl", hash = "sha256:5e1de4d849fee02c63b040a4a3fd567f4ab104defd8a5511fbbc24a8a017efbc"},
+ {file = "docutils-0.19.tar.gz", hash = "sha256:33995a6753c30b7f577febfc2c50411fec6aac7f7ffeb7c4cfe5991072dcf9e6"},
+]
+
+[[package]]
+name = "dotty-dict"
+version = "1.3.1"
+description = "Dictionary wrapper for quick access to deeply nested keys."
+category = "dev"
+optional = false
+python-versions = ">=3.5,<4.0"
+files = [
+ {file = "dotty_dict-1.3.1-py3-none-any.whl", hash = "sha256:5022d234d9922f13aa711b4950372a06a6d64cb6d6db9ba43d0ba133ebfce31f"},
+ {file = "dotty_dict-1.3.1.tar.gz", hash = "sha256:4b016e03b8ae265539757a53eba24b9bfda506fb94fbce0bee843c6f05541a15"},
+]
+
+[[package]]
+name = "entrypoints"
+version = "0.4"
+description = "Discover and load entry points from installed packages."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
+ {file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
+]
+
+[[package]]
+name = "execnet"
+version = "1.9.0"
+description = "execnet: rapid multi-Python deployment"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+ {file = "execnet-1.9.0-py2.py3-none-any.whl", hash = "sha256:a295f7cc774947aac58dde7fdc85f4aa00c42adf5d8f5468fc630c1acf30a142"},
+ {file = "execnet-1.9.0.tar.gz", hash = "sha256:8f694f3ba9cc92cab508b152dcfe322153975c29bda272e2fd7f3f00f36e47c5"},
+]
+
+[package.extras]
+testing = ["pre-commit"]
+
+[[package]]
+name = "fastjsonschema"
+version = "2.16.3"
+description = "Fastest Python implementation of JSON schema"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "fastjsonschema-2.16.3-py3-none-any.whl", hash = "sha256:04fbecc94300436f628517b05741b7ea009506ce8f946d40996567c669318490"},
+ {file = "fastjsonschema-2.16.3.tar.gz", hash = "sha256:4a30d6315a68c253cfa8f963b9697246315aa3db89f98b97235e345dedfb0b8e"},
+]
+
+[package.extras]
+devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
+
+[[package]]
+name = "flake8"
+version = "4.0.1"
+description = "the modular source code checker: pep8 pyflakes and co"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
+ {file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = "<4.3", markers = "python_version < \"3.8\""}
+mccabe = ">=0.6.0,<0.7.0"
+pycodestyle = ">=2.8.0,<2.9.0"
+pyflakes = ">=2.4.0,<2.5.0"
+
+[[package]]
+name = "flake8-bugbear"
+version = "21.11.29"
+description = "A plugin for flake8 finding likely bugs and design problems in your program. Contains warnings that don't belong in pyflakes and pycodestyle."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "flake8-bugbear-21.11.29.tar.gz", hash = "sha256:8b04cb2fafc6a78e1a9d873bd3988e4282f7959bb6b0d7c1ae648ec09b937a7b"},
+ {file = "flake8_bugbear-21.11.29-py36.py37.py38-none-any.whl", hash = "sha256:179e41ddae5de5e3c20d1f61736feeb234e70958fbb56ab3c28a67739c8e9a82"},
+]
+
+[package.dependencies]
+attrs = ">=19.2.0"
+flake8 = ">=3.0.0"
+
+[package.extras]
+dev = ["coverage", "hypothesis", "hypothesmith (>=0.2)", "pre-commit"]
+
+[[package]]
+name = "fonttools"
+version = "4.38.0"
+description = "Tools to manipulate font files"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "fonttools-4.38.0-py3-none-any.whl", hash = "sha256:820466f43c8be8c3009aef8b87e785014133508f0de64ec469e4efb643ae54fb"},
+ {file = "fonttools-4.38.0.zip", hash = "sha256:2bb244009f9bf3fa100fc3ead6aeb99febe5985fa20afbfbaa2f8946c2fbdaf1"},
+]
+
+[package.extras]
+all = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres", "scipy", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=14.0.0)", "xattr", "zopfli (>=0.1.4)"]
+graphite = ["lz4 (>=1.7.4.2)"]
+interpolatable = ["munkres", "scipy"]
+lxml = ["lxml (>=4.0,<5)"]
+pathops = ["skia-pathops (>=0.5.0)"]
+plot = ["matplotlib"]
+repacker = ["uharfbuzz (>=0.23.0)"]
+symfont = ["sympy"]
+type1 = ["xattr"]
+ufo = ["fs (>=2.2.0,<3)"]
+unicode = ["unicodedata2 (>=14.0.0)"]
+woff = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "zopfli (>=0.1.4)"]
+
+[[package]]
+name = "gitdb"
+version = "4.0.10"
+description = "Git Object Database"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "gitdb-4.0.10-py3-none-any.whl", hash = "sha256:c286cf298426064079ed96a9e4a9d39e7f3e9bf15ba60701e95f5492f28415c7"},
+ {file = "gitdb-4.0.10.tar.gz", hash = "sha256:6eb990b69df4e15bad899ea868dc46572c3f75339735663b81de79b06f17eb9a"},
+]
+
+[package.dependencies]
+smmap = ">=3.0.1,<6"
+
+[[package]]
+name = "gitpython"
+version = "3.1.31"
+description = "GitPython is a Python library used to interact with Git repositories"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "GitPython-3.1.31-py3-none-any.whl", hash = "sha256:f04893614f6aa713a60cbbe1e6a97403ef633103cdd0ef5eb6efe0deb98dbe8d"},
+ {file = "GitPython-3.1.31.tar.gz", hash = "sha256:8ce3bcf69adfdf7c7d503e78fd3b1c492af782d58893b650adb2ac8912ddd573"},
+]
+
+[package.dependencies]
+gitdb = ">=4.0.1,<5"
+typing-extensions = {version = ">=3.7.4.3", markers = "python_version < \"3.8\""}
+
+[[package]]
+name = "html5lib"
+version = "1.1"
+description = "HTML parser based on the WHATWG HTML specification"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+ {file = "html5lib-1.1-py2.py3-none-any.whl", hash = "sha256:0d78f8fde1c230e99fe37986a60526d7049ed4bf8a9fadbad5f00e22e58e041d"},
+ {file = "html5lib-1.1.tar.gz", hash = "sha256:b2e5b40261e20f354d198eae92afc10d750afb487ed5e50f9c4eaf07c184146f"},
+]
+
+[package.dependencies]
+six = ">=1.9"
+webencodings = "*"
+
+[package.extras]
+all = ["chardet (>=2.2)", "genshi", "lxml"]
+chardet = ["chardet (>=2.2)"]
+genshi = ["genshi"]
+lxml = ["lxml"]
+
+[[package]]
+name = "idna"
+version = "3.4"
+description = "Internationalized Domain Names in Applications (IDNA)"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
+ {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+]
+
+[[package]]
+name = "importlib-metadata"
+version = "4.2.0"
+description = "Read metadata from Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "importlib_metadata-4.2.0-py3-none-any.whl", hash = "sha256:057e92c15bc8d9e8109738a48db0ccb31b4d9d5cfbee5a8670879a30be66304b"},
+ {file = "importlib_metadata-4.2.0.tar.gz", hash = "sha256:b7e52a1f8dec14a75ea73e0891f3060099ca1d8e6a462a4dff11c3e119ea1b31"},
+]
+
+[package.dependencies]
+typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""}
+zipp = ">=0.5"
+
+[package.extras]
+docs = ["jaraco.packaging (>=8.2)", "rst.linker (>=1.9)", "sphinx"]
+testing = ["flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pep517", "pyfakefs", "pytest (>=4.6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.0.1)", "pytest-flake8", "pytest-mypy"]
+
+[[package]]
+name = "importlib-resources"
+version = "5.12.0"
+description = "Read resources from Python packages"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "importlib_resources-5.12.0-py3-none-any.whl", hash = "sha256:7b1deeebbf351c7578e09bf2f63fa2ce8b5ffec296e0d349139d43cca061a81a"},
+ {file = "importlib_resources-5.12.0.tar.gz", hash = "sha256:4be82589bf5c1d7999aedf2a45159d10cb3ca4f19b2271f8792bc8e6da7b22f6"},
+]
+
+[package.dependencies]
+zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
+testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
+
+[[package]]
+name = "iniconfig"
+version = "2.0.0"
+description = "brain-dead simple config-ini parsing"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "iniconfig-2.0.0-py3-none-any.whl", hash = "sha256:b6a85871a79d2e3b22d2d1b94ac2824226a63c6b741c88f7ae975f18b6778374"},
+ {file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
+]
+
+[[package]]
+name = "invoke"
+version = "1.7.3"
+description = "Pythonic task execution"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "invoke-1.7.3-py3-none-any.whl", hash = "sha256:d9694a865764dd3fd91f25f7e9a97fb41666e822bbb00e670091e3f43933574d"},
+ {file = "invoke-1.7.3.tar.gz", hash = "sha256:41b428342d466a82135d5ab37119685a989713742be46e42a3a399d685579314"},
+]
+
+[[package]]
+name = "ipykernel"
+version = "6.16.2"
+description = "IPython Kernel for Jupyter"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "ipykernel-6.16.2-py3-none-any.whl", hash = "sha256:67daf93e5b52456cd8eea87a8b59405d2bb80ae411864a1ea206c3631d8179af"},
+ {file = "ipykernel-6.16.2.tar.gz", hash = "sha256:463f3d87a92e99969b1605cb7a5b4d7b36b7145a0e72d06e65918a6ddefbe630"},
+]
+
+[package.dependencies]
+appnope = {version = "*", markers = "platform_system == \"Darwin\""}
+debugpy = ">=1.0"
+ipython = ">=7.23.1"
+jupyter-client = ">=6.1.12"
+matplotlib-inline = ">=0.1"
+nest-asyncio = "*"
+packaging = "*"
+psutil = "*"
+pyzmq = ">=17"
+tornado = ">=6.1"
+traitlets = ">=5.1.0"
+
+[package.extras]
+docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt"]
+test = ["flaky", "ipyparallel", "pre-commit", "pytest (>=7.0)", "pytest-cov", "pytest-timeout"]
+
+[[package]]
+name = "ipython"
+version = "7.34.0"
+description = "IPython: Productive Interactive Computing"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "ipython-7.34.0-py3-none-any.whl", hash = "sha256:c175d2440a1caff76116eb719d40538fbb316e214eda85c5515c303aacbfb23e"},
+ {file = "ipython-7.34.0.tar.gz", hash = "sha256:af3bdb46aa292bce5615b1b2ebc76c2080c5f77f54bda2ec72461317273e7cd6"},
+]
+
+[package.dependencies]
+appnope = {version = "*", markers = "sys_platform == \"darwin\""}
+backcall = "*"
+colorama = {version = "*", markers = "sys_platform == \"win32\""}
+decorator = "*"
+jedi = ">=0.16"
+matplotlib-inline = "*"
+pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
+pickleshare = "*"
+prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
+pygments = "*"
+setuptools = ">=18.5"
+traitlets = ">=4.2"
+
+[package.extras]
+all = ["Sphinx (>=1.3)", "ipykernel", "ipyparallel", "ipywidgets", "nbconvert", "nbformat", "nose (>=0.10.1)", "notebook", "numpy (>=1.17)", "pygments", "qtconsole", "requests", "testpath"]
+doc = ["Sphinx (>=1.3)"]
+kernel = ["ipykernel"]
+nbconvert = ["nbconvert"]
+nbformat = ["nbformat"]
+notebook = ["ipywidgets", "notebook"]
+parallel = ["ipyparallel"]
+qtconsole = ["qtconsole"]
+test = ["ipykernel", "nbformat", "nose (>=0.10.1)", "numpy (>=1.17)", "pygments", "requests", "testpath"]
+
+[[package]]
+name = "ipython-genutils"
+version = "0.2.0"
+description = "Vestigial utilities from IPython"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
+ {file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
+]
+
+[[package]]
+name = "ipywidgets"
+version = "8.0.4"
+description = "Jupyter interactive widgets"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "ipywidgets-8.0.4-py3-none-any.whl", hash = "sha256:ebb195e743b16c3947fe8827190fb87b4d00979c0fbf685afe4d2c4927059fa1"},
+ {file = "ipywidgets-8.0.4.tar.gz", hash = "sha256:c0005a77a47d77889cafed892b58e33b4a2a96712154404c6548ec22272811ea"},
+]
+
+[package.dependencies]
+ipykernel = ">=4.5.1"
+ipython = ">=6.1.0"
+jupyterlab-widgets = ">=3.0,<4.0"
+traitlets = ">=4.3.1"
+widgetsnbextension = ">=4.0,<5.0"
+
+[package.extras]
+test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
+
+[[package]]
+name = "isort"
+version = "5.11.5"
+description = "A Python utility / library to sort Python imports."
+category = "dev"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+ {file = "isort-5.11.5-py3-none-any.whl", hash = "sha256:ba1d72fb2595a01c7895a5128f9585a5cc4b6d395f1c8d514989b9a7eb2a8746"},
+ {file = "isort-5.11.5.tar.gz", hash = "sha256:6be1f76a507cb2ecf16c7cf14a37e41609ca082330be4e3436a18ef74add55db"},
+]
+
+[package.extras]
+colors = ["colorama (>=0.4.3,<0.5.0)"]
+pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"]
+plugins = ["setuptools"]
+requirements-deprecated-finder = ["pip-api", "pipreqs"]
+
+[[package]]
+name = "jaraco-classes"
+version = "3.2.3"
+description = "Utility functions for Python class constructs"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jaraco.classes-3.2.3-py3-none-any.whl", hash = "sha256:2353de3288bc6b82120752201c6b1c1a14b058267fa424ed5ce5984e3b922158"},
+ {file = "jaraco.classes-3.2.3.tar.gz", hash = "sha256:89559fa5c1d3c34eff6f631ad80bb21f378dbcbb35dd161fd2c6b93f5be2f98a"},
+]
+
+[package.dependencies]
+more-itertools = "*"
+
+[package.extras]
+docs = ["jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
+testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
+
+[[package]]
+name = "jedi"
+version = "0.18.2"
+description = "An autocompletion tool for Python that can be used for text editors."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "jedi-0.18.2-py2.py3-none-any.whl", hash = "sha256:203c1fd9d969ab8f2119ec0a3342e0b49910045abe6af0a3ae83a5764d54639e"},
+ {file = "jedi-0.18.2.tar.gz", hash = "sha256:bae794c30d07f6d910d32a7048af09b5a39ed740918da923c6b780790ebac612"},
+]
+
+[package.dependencies]
+parso = ">=0.8.0,<0.9.0"
+
+[package.extras]
+docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"]
+qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
+testing = ["Django (<3.1)", "attrs", "colorama", "docopt", "pytest (<7.0.0)"]
+
+[[package]]
+name = "jeepney"
+version = "0.8.0"
+description = "Low-level, pure Python DBus protocol wrapper."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jeepney-0.8.0-py3-none-any.whl", hash = "sha256:c0a454ad016ca575060802ee4d590dd912e35c122fa04e70306de3d076cce755"},
+ {file = "jeepney-0.8.0.tar.gz", hash = "sha256:5efe48d255973902f6badc3ce55e2aa6c5c3b3bc642059ef3a91247bcfcc5806"},
+]
+
+[package.extras]
+test = ["async-timeout", "pytest", "pytest-asyncio (>=0.17)", "pytest-trio", "testpath", "trio"]
+trio = ["async_generator", "trio"]
+
+[[package]]
+name = "jinja2"
+version = "3.1.2"
+description = "A very fast and expressive template engine."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
+ {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
+]
+
+[package.dependencies]
+MarkupSafe = ">=2.0"
+
+[package.extras]
+i18n = ["Babel (>=2.7)"]
+
+[[package]]
+name = "jsonschema"
+version = "4.17.3"
+description = "An implementation of JSON Schema validation for Python"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jsonschema-4.17.3-py3-none-any.whl", hash = "sha256:a870ad254da1a8ca84b6a2905cac29d265f805acc57af304784962a2aa6508f6"},
+ {file = "jsonschema-4.17.3.tar.gz", hash = "sha256:0f864437ab8b6076ba6707453ef8f98a6a0d512a80e93f8abdb676f737ecb60d"},
+]
+
+[package.dependencies]
+attrs = ">=17.4.0"
+importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
+importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
+pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
+pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
+typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
+
+[package.extras]
+format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
+format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
+
+[[package]]
+name = "jupyter"
+version = "1.0.0"
+description = "Jupyter metapackage. Install all the Jupyter components in one go."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
+ {file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
+ {file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
+]
+
+[package.dependencies]
+ipykernel = "*"
+ipywidgets = "*"
+jupyter-console = "*"
+nbconvert = "*"
+notebook = "*"
+qtconsole = "*"
+
+[[package]]
+name = "jupyter-client"
+version = "7.4.9"
+description = "Jupyter protocol implementation and client libraries"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jupyter_client-7.4.9-py3-none-any.whl", hash = "sha256:214668aaea208195f4c13d28eb272ba79f945fc0cf3f11c7092c20b2ca1980e7"},
+ {file = "jupyter_client-7.4.9.tar.gz", hash = "sha256:52be28e04171f07aed8f20e1616a5a552ab9fee9cbbe6c1896ae170c3880d392"},
+]
+
+[package.dependencies]
+entrypoints = "*"
+jupyter-core = ">=4.9.2"
+nest-asyncio = ">=1.5.4"
+python-dateutil = ">=2.8.2"
+pyzmq = ">=23.0"
+tornado = ">=6.2"
+traitlets = "*"
+
+[package.extras]
+doc = ["ipykernel", "myst-parser", "sphinx (>=1.3.6)", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
+test = ["codecov", "coverage", "ipykernel (>=6.12)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
+
+[[package]]
+name = "jupyter-console"
+version = "6.6.2"
+description = "Jupyter terminal console"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jupyter_console-6.6.2-py3-none-any.whl", hash = "sha256:0ba2da017be36bfae489f233f031f251da5b88b0ceafabea240b465ee474944a"},
+ {file = "jupyter_console-6.6.2.tar.gz", hash = "sha256:7385dfed8a01fd51e3c98449dae24f38c1ba015d2073835e785671653a0967fc"},
+]
+
+[package.dependencies]
+ipykernel = ">=6.14"
+ipython = "*"
+jupyter-client = ">=7.0.0"
+jupyter-core = ">=4.12,<5.0.0 || >=5.1.0"
+prompt-toolkit = ">=3.0.30"
+pygments = "*"
+pyzmq = ">=17"
+traitlets = ">=5.4"
+
+[package.extras]
+test = ["flaky", "pexpect", "pytest"]
+
+[[package]]
+name = "jupyter-core"
+version = "4.12.0"
+description = "Jupyter core package. A base package on which Jupyter projects rely."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jupyter_core-4.12.0-py3-none-any.whl", hash = "sha256:a54672c539333258495579f6964144924e0aa7b07f7069947bef76d7ea5cb4c1"},
+ {file = "jupyter_core-4.12.0.tar.gz", hash = "sha256:87f39d7642412ae8a52291cc68e71ac01dfa2c735df2701f8108251d51b4f460"},
+]
+
+[package.dependencies]
+pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
+traitlets = "*"
+
+[package.extras]
+test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
+
+[[package]]
+name = "jupyter-server"
+version = "1.23.6"
+description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jupyter_server-1.23.6-py3-none-any.whl", hash = "sha256:ede3a5c09b075541d960bb02854b617c0ffa58706c37de92e2d1c5acdc359c20"},
+ {file = "jupyter_server-1.23.6.tar.gz", hash = "sha256:fde15df6d11a053b17cf2450eea9984ec9a6f28ad3cb2caa73c31e53ea184fc1"},
+]
+
+[package.dependencies]
+anyio = ">=3.1.0,<4"
+argon2-cffi = "*"
+jinja2 = "*"
+jupyter-client = ">=6.1.12"
+jupyter-core = ">=4.12,<5.0.0 || >=5.1.0"
+nbconvert = ">=6.4.4"
+nbformat = ">=5.2.0"
+packaging = "*"
+prometheus-client = "*"
+pywinpty = {version = "*", markers = "os_name == \"nt\""}
+pyzmq = ">=17"
+Send2Trash = "*"
+terminado = ">=0.8.3"
+tornado = ">=6.1.0"
+traitlets = ">=5.1"
+websocket-client = "*"
+
+[package.extras]
+test = ["coverage", "ipykernel", "pre-commit", "pytest (>=7.0)", "pytest-console-scripts", "pytest-cov", "pytest-mock", "pytest-timeout", "pytest-tornasync", "requests"]
+
+[[package]]
+name = "jupyterlab-pygments"
+version = "0.2.2"
+description = "Pygments theme using JupyterLab CSS variables"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
+ {file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
+]
+
+[[package]]
+name = "jupyterlab-widgets"
+version = "3.0.5"
+description = "Jupyter interactive widgets for JupyterLab"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "jupyterlab_widgets-3.0.5-py3-none-any.whl", hash = "sha256:a04a42e50231b355b7087e16a818f541e53589f7647144ea0344c4bf16f300e5"},
+ {file = "jupyterlab_widgets-3.0.5.tar.gz", hash = "sha256:eeaecdeaf6c03afc960ddae201ced88d5979b4ca9c3891bcb8f6631af705f5ef"},
+]
+
+[[package]]
+name = "keyring"
+version = "23.9.3"
+description = "Store and access your passwords safely."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "keyring-23.9.3-py3-none-any.whl", hash = "sha256:69732a15cb1433bdfbc3b980a8a36a04878a6cfd7cb99f497b573f31618001c0"},
+ {file = "keyring-23.9.3.tar.gz", hash = "sha256:69b01dd83c42f590250fe7a1f503fc229b14de83857314b1933a3ddbf595c4a5"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
+"jaraco.classes" = "*"
+jeepney = {version = ">=0.4.2", markers = "sys_platform == \"linux\""}
+pywin32-ctypes = {version = "<0.1.0 || >0.1.0,<0.1.1 || >0.1.1", markers = "sys_platform == \"win32\""}
+SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
+
+[package.extras]
+docs = ["jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx"]
+testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
+
+[[package]]
+name = "kiwisolver"
+version = "1.4.4"
+description = "A fast implementation of the Cassowary constraint solver"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
+ {file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ea21f66820452a3f5d1655f8704a60d66ba1191359b96541eaf457710a5fc6"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bc9db8a3efb3e403e4ecc6cd9489ea2bac94244f80c78e27c31dcc00d2790ac2"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d5b61785a9ce44e5a4b880272baa7cf6c8f48a5180c3e81c59553ba0cb0821ca"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c2dbb44c3f7e6c4d3487b31037b1bdbf424d97687c1747ce4ff2895795c9bf69"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6295ecd49304dcf3bfbfa45d9a081c96509e95f4b9d0eb7ee4ec0530c4a96514"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4bd472dbe5e136f96a4b18f295d159d7f26fd399136f5b17b08c4e5f498cd494"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf7d9fce9bcc4752ca4a1b80aabd38f6d19009ea5cbda0e0856983cf6d0023f5"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d6601aed50c74e0ef02f4204da1816147a6d3fbdc8b3872d263338a9052c51"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:877272cf6b4b7e94c9614f9b10140e198d2186363728ed0f701c6eee1baec1da"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:db608a6757adabb32f1cfe6066e39b3706d8c3aa69bbc353a5b61edad36a5cb4"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5853eb494c71e267912275e5586fe281444eb5e722de4e131cddf9d442615626"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:f0a1dbdb5ecbef0d34eb77e56fcb3e95bbd7e50835d9782a45df81cc46949750"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:283dffbf061a4ec60391d51e6155e372a1f7a4f5b15d59c8505339454f8989e4"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-win32.whl", hash = "sha256:d06adcfa62a4431d404c31216f0f8ac97397d799cd53800e9d3efc2fbb3cf14e"},
+ {file = "kiwisolver-1.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e7da3fec7408813a7cebc9e4ec55afed2d0fd65c4754bc376bf03498d4e92686"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
+ {file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
+ {file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
+ {file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
+ {file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
+ {file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
+ {file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
+ {file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
+ {file = "kiwisolver-1.4.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:28bc5b299f48150b5f822ce68624e445040595a4ac3d59251703779836eceff9"},
+ {file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:81e38381b782cc7e1e46c4e14cd997ee6040768101aefc8fa3c24a4cc58e98f8"},
+ {file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2a66fdfb34e05b705620dd567f5a03f239a088d5a3f321e7b6ac3239d22aa286"},
+ {file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:872b8ca05c40d309ed13eb2e582cab0c5a05e81e987ab9c521bf05ad1d5cf5cb"},
+ {file = "kiwisolver-1.4.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:70e7c2e7b750585569564e2e5ca9845acfaa5da56ac46df68414f29fea97be9f"},
+ {file = "kiwisolver-1.4.4-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9f85003f5dfa867e86d53fac6f7e6f30c045673fa27b603c397753bebadc3008"},
+ {file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e307eb9bd99801f82789b44bb45e9f541961831c7311521b13a6c85afc09767"},
+ {file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1792d939ec70abe76f5054d3f36ed5656021dcad1322d1cc996d4e54165cef9"},
+ {file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6cb459eea32a4e2cf18ba5fcece2dbdf496384413bc1bae15583f19e567f3b2"},
+ {file = "kiwisolver-1.4.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:36dafec3d6d6088d34e2de6b85f9d8e2324eb734162fba59d2ba9ed7a2043d5b"},
+ {file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
+]
+
+[package.dependencies]
+typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
+
+[[package]]
+name = "lazy-object-proxy"
+version = "1.9.0"
+description = "A fast and thorough lazy object proxy."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "lazy-object-proxy-1.9.0.tar.gz", hash = "sha256:659fb5809fa4629b8a1ac5106f669cfc7bef26fbb389dda53b3e010d1ac4ebae"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b40387277b0ed2d0602b8293b94d7257e17d1479e257b4de114ea11a8cb7f2d7"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8c6cfb338b133fbdbc5cfaa10fe3c6aeea827db80c978dbd13bc9dd8526b7d4"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:721532711daa7db0d8b779b0bb0318fa87af1c10d7fe5e52ef30f8eff254d0cd"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:66a3de4a3ec06cd8af3f61b8e1ec67614fbb7c995d02fa224813cb7afefee701"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1aa3de4088c89a1b69f8ec0dcc169aa725b0ff017899ac568fe44ddc1396df46"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-win32.whl", hash = "sha256:f0705c376533ed2a9e5e97aacdbfe04cecd71e0aa84c7c0595d02ef93b6e4455"},
+ {file = "lazy_object_proxy-1.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:ea806fd4c37bf7e7ad82537b0757999264d5f70c45468447bb2b91afdbe73a6e"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:946d27deaff6cf8452ed0dba83ba38839a87f4f7a9732e8f9fd4107b21e6ff07"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79a31b086e7e68b24b99b23d57723ef7e2c6d81ed21007b6281ebcd1688acb0a"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f699ac1c768270c9e384e4cbd268d6e67aebcfae6cd623b4d7c3bfde5a35db59"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bfb38f9ffb53b942f2b5954e0f610f1e721ccebe9cce9025a38c8ccf4a5183a4"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:189bbd5d41ae7a498397287c408617fe5c48633e7755287b21d741f7db2706a9"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-win32.whl", hash = "sha256:81fc4d08b062b535d95c9ea70dbe8a335c45c04029878e62d744bdced5141586"},
+ {file = "lazy_object_proxy-1.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:f2457189d8257dd41ae9b434ba33298aec198e30adf2dcdaaa3a28b9994f6adb"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:d9e25ef10a39e8afe59a5c348a4dbf29b4868ab76269f81ce1674494e2565a6e"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cbf9b082426036e19c6924a9ce90c740a9861e2bdc27a4834fd0a910742ac1e8"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f5fa4a61ce2438267163891961cfd5e32ec97a2c444e5b842d574251ade27d2"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:8fa02eaab317b1e9e03f69aab1f91e120e7899b392c4fc19807a8278a07a97e8"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e7c21c95cae3c05c14aafffe2865bbd5e377cfc1348c4f7751d9dc9a48ca4bda"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-win32.whl", hash = "sha256:f12ad7126ae0c98d601a7ee504c1122bcef553d1d5e0c3bfa77b16b3968d2734"},
+ {file = "lazy_object_proxy-1.9.0-cp37-cp37m-win_amd64.whl", hash = "sha256:edd20c5a55acb67c7ed471fa2b5fb66cb17f61430b7a6b9c3b4a1e40293b1671"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2d0daa332786cf3bb49e10dc6a17a52f6a8f9601b4cf5c295a4f85854d61de63"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cd077f3d04a58e83d04b20e334f678c2b0ff9879b9375ed107d5d07ff160171"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:660c94ea760b3ce47d1855a30984c78327500493d396eac4dfd8bd82041b22be"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:212774e4dfa851e74d393a2370871e174d7ff0ebc980907723bb67d25c8a7c30"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:f0117049dd1d5635bbff65444496c90e0baa48ea405125c088e93d9cf4525b11"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-win32.whl", hash = "sha256:0a891e4e41b54fd5b8313b96399f8b0e173bbbfc03c7631f01efbe29bb0bcf82"},
+ {file = "lazy_object_proxy-1.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:9990d8e71b9f6488e91ad25f322898c136b008d87bf852ff65391b004da5e17b"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9e7551208b2aded9c1447453ee366f1c4070602b3d932ace044715d89666899b"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f83ac4d83ef0ab017683d715ed356e30dd48a93746309c8f3517e1287523ef4"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7322c3d6f1766d4ef1e51a465f47955f1e8123caee67dd641e67d539a534d006"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:18b78ec83edbbeb69efdc0e9c1cb41a3b1b1ed11ddd8ded602464c3fc6020494"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:09763491ce220c0299688940f8dc2c5d05fd1f45af1e42e636b2e8b2303e4382"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-win32.whl", hash = "sha256:9090d8e53235aa280fc9239a86ae3ea8ac58eff66a705fa6aa2ec4968b95c821"},
+ {file = "lazy_object_proxy-1.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:db1c1722726f47e10e0b5fdbf15ac3b8adb58c091d12b3ab713965795036985f"},
+]
+
+[[package]]
+name = "lockfile"
+version = "0.12.2"
+description = "Platform-independent file locking module"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "lockfile-0.12.2-py2.py3-none-any.whl", hash = "sha256:6c3cb24f344923d30b2785d5ad75182c8ea7ac1b6171b08657258ec7429d50fa"},
+ {file = "lockfile-0.12.2.tar.gz", hash = "sha256:6aed02de03cba24efabcd600b30540140634fc06cfa603822d508d5361e9f799"},
+]
+
+[[package]]
+name = "markupsafe"
+version = "2.1.2"
+description = "Safely add untrusted strings to HTML/XML markup."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:665a36ae6f8f20a4676b53224e33d456a6f5a72657d9c83c2aa00765072f31f7"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:340bea174e9761308703ae988e982005aedf427de816d1afe98147668cc03036"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22152d00bf4a9c7c83960521fc558f55a1adbc0631fbb00a9471e097b19d72e1"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28057e985dace2f478e042eaa15606c7efccb700797660629da387eb289b9323"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca244fa73f50a800cf8c3ebf7fd93149ec37f5cb9596aa8873ae2c1d23498601"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d9d971ec1e79906046aa3ca266de79eac42f1dbf3612a05dc9368125952bd1a1"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e007132af78ea9df29495dbf7b5824cb71648d7133cf7848a2a5dd00d36f9ff"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7313ce6a199651c4ed9d7e4cfb4aa56fe923b1adf9af3b420ee14e6d9a73df65"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-win32.whl", hash = "sha256:c4a549890a45f57f1ebf99c067a4ad0cb423a05544accaf2b065246827ed9603"},
+ {file = "MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:835fb5e38fd89328e9c81067fd642b3593c33e1e17e2fdbf77f5676abb14a156"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2ec4f2d48ae59bbb9d1f9d7efb9236ab81429a764dedca114f5fdabbc3788013"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:608e7073dfa9e38a85d38474c082d4281f4ce276ac0010224eaba11e929dd53a"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65608c35bfb8a76763f37036547f7adfd09270fbdbf96608be2bead319728fcd"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2bfb563d0211ce16b63c7cb9395d2c682a23187f54c3d79bfec33e6705473c6"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da25303d91526aac3672ee6d49a2f3db2d9502a4a60b55519feb1a4c7714e07d"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9cad97ab29dfc3f0249b483412c85c8ef4766d96cdf9dcf5a1e3caa3f3661cf1"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:085fd3201e7b12809f9e6e9bc1e5c96a368c8523fad5afb02afe3c051ae4afcc"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1bea30e9bf331f3fef67e0a3877b2288593c98a21ccb2cf29b74c581a4eb3af0"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-win32.whl", hash = "sha256:7df70907e00c970c60b9ef2938d894a9381f38e6b9db73c5be35e59d92e06625"},
+ {file = "MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:e55e40ff0cc8cc5c07996915ad367fa47da6b3fc091fdadca7f5403239c5fec3"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a6e40afa7f45939ca356f348c8e23048e02cb109ced1eb8420961b2f40fb373a"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf877ab4ed6e302ec1d04952ca358b381a882fbd9d1b07cccbfd61783561f98a"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63ba06c9941e46fa389d389644e2d8225e0e3e5ebcc4ff1ea8506dce646f8c8a"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1cd098434e83e656abf198f103a8207a8187c0fc110306691a2e94a78d0abb2"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:55f44b440d491028addb3b88f72207d71eeebfb7b5dbf0643f7c023ae1fba619"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a6f2fcca746e8d5910e18782f976489939d54a91f9411c32051b4aab2bd7c513"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0b462104ba25f1ac006fdab8b6a01ebbfbce9ed37fd37fd4acd70c67c973e460"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-win32.whl", hash = "sha256:7668b52e102d0ed87cb082380a7e2e1e78737ddecdde129acadb0eccc5423859"},
+ {file = "MarkupSafe-2.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6d6607f98fcf17e534162f0709aaad3ab7a96032723d8ac8750ffe17ae5a0666"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a806db027852538d2ad7555b203300173dd1b77ba116de92da9afbc3a3be3eed"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4abaec6ca3ad8660690236d11bfe28dfd707778e2442b45addd2f086d6ef094"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f03a532d7dee1bed20bc4884194a16160a2de9ffc6354b3878ec9682bb623c54"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cf06cdc1dda95223e9d2d3c58d3b178aa5dacb35ee7e3bbac10e4e1faacb419"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22731d79ed2eb25059ae3df1dfc9cb1546691cc41f4e3130fe6bfbc3ecbbecfa"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f8ffb705ffcf5ddd0e80b65ddf7bed7ee4f5a441ea7d3419e861a12eaf41af58"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8db032bf0ce9022a8e41a22598eefc802314e81b879ae093f36ce9ddf39ab1ba"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2298c859cfc5463f1b64bd55cb3e602528db6fa0f3cfd568d3605c50678f8f03"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-win32.whl", hash = "sha256:50c42830a633fa0cf9e7d27664637532791bfc31c731a87b202d2d8ac40c3ea2"},
+ {file = "MarkupSafe-2.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:bb06feb762bade6bf3c8b844462274db0c76acc95c52abe8dbed28ae3d44a147"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99625a92da8229df6d44335e6fcc558a5037dd0a760e11d84be2260e6f37002f"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bca7e26c1dd751236cfb0c6c72d4ad61d986e9a41bbf76cb445f69488b2a2bd"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40627dcf047dadb22cd25ea7ecfe9cbf3bbbad0482ee5920b582f3809c97654f"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40dfd3fefbef579ee058f139733ac336312663c6706d1163b82b3003fb1925c4"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:090376d812fb6ac5f171e5938e82e7f2d7adc2b629101cec0db8b267815c85e2"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2e7821bffe00aa6bd07a23913b7f4e01328c3d5cc0b40b36c0bd81d362faeb65"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c0a33bc9f02c2b17c3ea382f91b4db0e6cde90b63b296422a939886a7a80de1c"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:b8526c6d437855442cdd3d87eede9c425c4445ea011ca38d937db299382e6fa3"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-win32.whl", hash = "sha256:137678c63c977754abe9086a3ec011e8fd985ab90631145dfb9294ad09c102a7"},
+ {file = "MarkupSafe-2.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:0576fe974b40a400449768941d5d0858cc624e3249dfd1e0c33674e5c7ca7aed"},
+ {file = "MarkupSafe-2.1.2.tar.gz", hash = "sha256:abcabc8c2b26036d62d4c746381a6f7cf60aafcc653198ad678306986b09450d"},
+]
+
+[[package]]
+name = "matplotlib"
+version = "3.5.3"
+description = "Python plotting package"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a206a1b762b39398efea838f528b3a6d60cdb26fe9d58b48265787e29cd1d693"},
+ {file = "matplotlib-3.5.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cd45a6f3e93a780185f70f05cf2a383daed13c3489233faad83e81720f7ede24"},
+ {file = "matplotlib-3.5.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d62880e1f60e5a30a2a8484432bcb3a5056969dc97258d7326ad465feb7ae069"},
+ {file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9ab29589cef03bc88acfa3a1490359000c18186fc30374d8aa77d33cc4a51a4a"},
+ {file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2886cc009f40e2984c083687251821f305d811d38e3df8ded414265e4583f0c5"},
+ {file = "matplotlib-3.5.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c995f7d9568f18b5db131ab124c64e51b6820a92d10246d4f2b3f3a66698a15b"},
+ {file = "matplotlib-3.5.3-cp310-cp310-win32.whl", hash = "sha256:6bb93a0492d68461bd458eba878f52fdc8ac7bdb6c4acdfe43dba684787838c2"},
+ {file = "matplotlib-3.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:2e6d184ebe291b9e8f7e78bbab7987d269c38ea3e062eace1fe7d898042ef804"},
+ {file = "matplotlib-3.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ea6aef5c4338e58d8d376068e28f80a24f54e69f09479d1c90b7172bad9f25b"},
+ {file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:839d47b8ead7ad9669aaacdbc03f29656dc21f0d41a6fea2d473d856c39c8b1c"},
+ {file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3b4fa56159dc3c7f9250df88f653f085068bcd32dcd38e479bba58909254af7f"},
+ {file = "matplotlib-3.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:94ff86af56a3869a4ae26a9637a849effd7643858a1a04dd5ee50e9ab75069a7"},
+ {file = "matplotlib-3.5.3-cp37-cp37m-win32.whl", hash = "sha256:35a8ad4dddebd51f94c5d24bec689ec0ec66173bf614374a1244c6241c1595e0"},
+ {file = "matplotlib-3.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:43e9d3fa077bf0cc95ded13d331d2156f9973dce17c6f0c8b49ccd57af94dbd9"},
+ {file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:22227c976ad4dc8c5a5057540421f0d8708c6560744ad2ad638d48e2984e1dbc"},
+ {file = "matplotlib-3.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf618a825deb6205f015df6dfe6167a5d9b351203b03fab82043ae1d30f16511"},
+ {file = "matplotlib-3.5.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9befa5954cdbc085e37d974ff6053da269474177921dd61facdad8023c4aeb51"},
+ {file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3840c280ebc87a48488a46f760ea1c0c0c83fcf7abbe2e6baf99d033fd35fd8"},
+ {file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dacddf5bfcec60e3f26ec5c0ae3d0274853a258b6c3fc5ef2f06a8eb23e042be"},
+ {file = "matplotlib-3.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b428076a55fb1c084c76cb93e68006f27d247169f056412607c5c88828d08f88"},
+ {file = "matplotlib-3.5.3-cp38-cp38-win32.whl", hash = "sha256:874df7505ba820e0400e7091199decf3ff1fde0583652120c50cd60d5820ca9a"},
+ {file = "matplotlib-3.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:b28de401d928890187c589036857a270a032961411934bdac4cf12dde3d43094"},
+ {file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:3211ba82b9f1518d346f6309df137b50c3dc4421b4ed4815d1d7eadc617f45a1"},
+ {file = "matplotlib-3.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6fe807e8a22620b4cd95cfbc795ba310dc80151d43b037257250faf0bfcd82bc"},
+ {file = "matplotlib-3.5.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5c096363b206a3caf43773abebdbb5a23ea13faef71d701b21a9c27fdcef72f4"},
+ {file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bcdfcb0f976e1bac6721d7d457c17be23cf7501f977b6a38f9d38a3762841f7"},
+ {file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e64ac9be9da6bfff0a732e62116484b93b02a0b4d4b19934fb4f8e7ad26ad6a"},
+ {file = "matplotlib-3.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:73dd93dc35c85dece610cca8358003bf0760d7986f70b223e2306b4ea6d1406b"},
+ {file = "matplotlib-3.5.3-cp39-cp39-win32.whl", hash = "sha256:879c7e5fce4939c6aa04581dfe08d57eb6102a71f2e202e3314d5fbc072fd5a0"},
+ {file = "matplotlib-3.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:ab8d26f07fe64f6f6736d635cce7bfd7f625320490ed5bfc347f2cdb4fae0e56"},
+ {file = "matplotlib-3.5.3-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:99482b83ebf4eb6d5fc6813d7aacdefdd480f0d9c0b52dcf9f1cc3b2c4b3361a"},
+ {file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f814504e459c68118bf2246a530ed953ebd18213dc20e3da524174d84ed010b2"},
+ {file = "matplotlib-3.5.3-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:57f1b4e69f438a99bb64d7f2c340db1b096b41ebaa515cf61ea72624279220ce"},
+ {file = "matplotlib-3.5.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d2484b350bf3d32cae43f85dcfc89b3ed7bd2bcd781ef351f93eb6fb2cc483f9"},
+ {file = "matplotlib-3.5.3.tar.gz", hash = "sha256:339cac48b80ddbc8bfd05daae0a3a73414651a8596904c2a881cfd1edb65f26c"},
+]
+
+[package.dependencies]
+cycler = ">=0.10"
+fonttools = ">=4.22.0"
+kiwisolver = ">=1.0.1"
+numpy = ">=1.17"
+packaging = ">=20.0"
+pillow = ">=6.2.0"
+pyparsing = ">=2.2.1"
+python-dateutil = ">=2.7"
+
+[[package]]
+name = "matplotlib-inline"
+version = "0.1.6"
+description = "Inline Matplotlib backend for Jupyter"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
+ {file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
+]
+
+[package.dependencies]
+traitlets = "*"
+
+[[package]]
+name = "mccabe"
+version = "0.6.1"
+description = "McCabe checker, plugin for flake8"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
+ {file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
+]
+
+[[package]]
+name = "mistune"
+version = "2.0.5"
+description = "A sane Markdown parser with useful plugins and renderers"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "mistune-2.0.5-py2.py3-none-any.whl", hash = "sha256:bad7f5d431886fcbaf5f758118ecff70d31f75231b34024a1341120340a65ce8"},
+ {file = "mistune-2.0.5.tar.gz", hash = "sha256:0246113cb2492db875c6be56974a7c893333bf26cd92891c85f63151cee09d34"},
+]
+
+[[package]]
+name = "more-itertools"
+version = "9.1.0"
+description = "More routines for operating on iterables, beyond itertools"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "more-itertools-9.1.0.tar.gz", hash = "sha256:cabaa341ad0389ea83c17a94566a53ae4c9d07349861ecb14dc6d0345cf9ac5d"},
+ {file = "more_itertools-9.1.0-py3-none-any.whl", hash = "sha256:d2bc7f02446e86a68911e58ded76d6561eea00cddfb2a91e7019bbb586c799f3"},
+]
+
+[[package]]
+name = "msgpack"
+version = "1.0.4"
+description = "MessagePack serializer"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4ab251d229d10498e9a2f3b1e68ef64cb393394ec477e3370c457f9430ce9250"},
+ {file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:112b0f93202d7c0fef0b7810d465fde23c746a2d482e1e2de2aafd2ce1492c88"},
+ {file = "msgpack-1.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:002b5c72b6cd9b4bafd790f364b8480e859b4712e91f43014fe01e4f957b8467"},
+ {file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35bc0faa494b0f1d851fd29129b2575b2e26d41d177caacd4206d81502d4c6a6"},
+ {file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4733359808c56d5d7756628736061c432ded018e7a1dff2d35a02439043321aa"},
+ {file = "msgpack-1.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb514ad14edf07a1dbe63761fd30f89ae79b42625731e1ccf5e1f1092950eaa6"},
+ {file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c23080fdeec4716aede32b4e0ef7e213c7b1093eede9ee010949f2a418ced6ba"},
+ {file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:49565b0e3d7896d9ea71d9095df15b7f75a035c49be733051c34762ca95bbf7e"},
+ {file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:aca0f1644d6b5a73eb3e74d4d64d5d8c6c3d577e753a04c9e9c87d07692c58db"},
+ {file = "msgpack-1.0.4-cp310-cp310-win32.whl", hash = "sha256:0dfe3947db5fb9ce52aaea6ca28112a170db9eae75adf9339a1aec434dc954ef"},
+ {file = "msgpack-1.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dea20515f660aa6b7e964433b1808d098dcfcabbebeaaad240d11f909298075"},
+ {file = "msgpack-1.0.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e83f80a7fec1a62cf4e6c9a660e39c7f878f603737a0cdac8c13131d11d97f52"},
+ {file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c11a48cf5e59026ad7cb0dc29e29a01b5a66a3e333dc11c04f7e991fc5510a9"},
+ {file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1276e8f34e139aeff1c77a3cefb295598b504ac5314d32c8c3d54d24fadb94c9"},
+ {file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6c9566f2c39ccced0a38d37c26cc3570983b97833c365a6044edef3574a00c08"},
+ {file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:fcb8a47f43acc113e24e910399376f7277cf8508b27e5b88499f053de6b115a8"},
+ {file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:76ee788122de3a68a02ed6f3a16bbcd97bc7c2e39bd4d94be2f1821e7c4a64e6"},
+ {file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:0a68d3ac0104e2d3510de90a1091720157c319ceeb90d74f7b5295a6bee51bae"},
+ {file = "msgpack-1.0.4-cp36-cp36m-win32.whl", hash = "sha256:85f279d88d8e833ec015650fd15ae5eddce0791e1e8a59165318f371158efec6"},
+ {file = "msgpack-1.0.4-cp36-cp36m-win_amd64.whl", hash = "sha256:c1683841cd4fa45ac427c18854c3ec3cd9b681694caf5bff04edb9387602d661"},
+ {file = "msgpack-1.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a75dfb03f8b06f4ab093dafe3ddcc2d633259e6c3f74bb1b01996f5d8aa5868c"},
+ {file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9667bdfdf523c40d2511f0e98a6c9d3603be6b371ae9a238b7ef2dc4e7a427b0"},
+ {file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11184bc7e56fd74c00ead4f9cc9a3091d62ecb96e97653add7a879a14b003227"},
+ {file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac5bd7901487c4a1dd51a8c58f2632b15d838d07ceedaa5e4c080f7190925bff"},
+ {file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1e91d641d2bfe91ba4c52039adc5bccf27c335356055825c7f88742c8bb900dd"},
+ {file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:2a2df1b55a78eb5f5b7d2a4bb221cd8363913830145fad05374a80bf0877cb1e"},
+ {file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:545e3cf0cf74f3e48b470f68ed19551ae6f9722814ea969305794645da091236"},
+ {file = "msgpack-1.0.4-cp37-cp37m-win32.whl", hash = "sha256:2cc5ca2712ac0003bcb625c96368fd08a0f86bbc1a5578802512d87bc592fe44"},
+ {file = "msgpack-1.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:eba96145051ccec0ec86611fe9cf693ce55f2a3ce89c06ed307de0e085730ec1"},
+ {file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:7760f85956c415578c17edb39eed99f9181a48375b0d4a94076d84148cf67b2d"},
+ {file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:449e57cc1ff18d3b444eb554e44613cffcccb32805d16726a5494038c3b93dab"},
+ {file = "msgpack-1.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d603de2b8d2ea3f3bcb2efe286849aa7a81531abc52d8454da12f46235092bcb"},
+ {file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48f5d88c99f64c456413d74a975bd605a9b0526293218a3b77220a2c15458ba9"},
+ {file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916c78f33602ecf0509cc40379271ba0f9ab572b066bd4bdafd7434dee4bc6e"},
+ {file = "msgpack-1.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:81fc7ba725464651190b196f3cd848e8553d4d510114a954681fd0b9c479d7e1"},
+ {file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5b5b962221fa2c5d3a7f8133f9abffc114fe218eb4365e40f17732ade576c8e"},
+ {file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:77ccd2af37f3db0ea59fb280fa2165bf1b096510ba9fe0cc2bf8fa92a22fdb43"},
+ {file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b17be2478b622939e39b816e0aa8242611cc8d3583d1cd8ec31b249f04623243"},
+ {file = "msgpack-1.0.4-cp38-cp38-win32.whl", hash = "sha256:2bb8cdf50dd623392fa75525cce44a65a12a00c98e1e37bf0fb08ddce2ff60d2"},
+ {file = "msgpack-1.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:26b8feaca40a90cbe031b03d82b2898bf560027160d3eae1423f4a67654ec5d6"},
+ {file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:462497af5fd4e0edbb1559c352ad84f6c577ffbbb708566a0abaaa84acd9f3ae"},
+ {file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2999623886c5c02deefe156e8f869c3b0aaeba14bfc50aa2486a0415178fce55"},
+ {file = "msgpack-1.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f0029245c51fd9473dc1aede1160b0a29f4a912e6b1dd353fa6d317085b219da"},
+ {file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed6f7b854a823ea44cf94919ba3f727e230da29feb4a99711433f25800cf747f"},
+ {file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df96d6eaf45ceca04b3f3b4b111b86b33785683d682c655063ef8057d61fd92"},
+ {file = "msgpack-1.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a4192b1ab40f8dca3f2877b70e63799d95c62c068c84dc028b40a6cb03ccd0f"},
+ {file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e3590f9fb9f7fbc36df366267870e77269c03172d086fa76bb4eba8b2b46624"},
+ {file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:1576bd97527a93c44fa856770197dec00d223b0b9f36ef03f65bac60197cedf8"},
+ {file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:63e29d6e8c9ca22b21846234913c3466b7e4ee6e422f205a2988083de3b08cae"},
+ {file = "msgpack-1.0.4-cp39-cp39-win32.whl", hash = "sha256:fb62ea4b62bfcb0b380d5680f9a4b3f9a2d166d9394e9bbd9666c0ee09a3645c"},
+ {file = "msgpack-1.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:4d5834a2a48965a349da1c5a79760d94a1a0172fbb5ab6b5b33cbf8447e109ce"},
+ {file = "msgpack-1.0.4.tar.gz", hash = "sha256:f5d869c18f030202eb412f08b28d2afeea553d6613aee89e200d7aca7ef01f5f"},
+]
+
+[[package]]
+name = "mypy"
+version = "1.0.1"
+description = "Optional static typing for Python"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "mypy-1.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:71a808334d3f41ef011faa5a5cd8153606df5fc0b56de5b2e89566c8093a0c9a"},
+ {file = "mypy-1.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:920169f0184215eef19294fa86ea49ffd4635dedfdea2b57e45cb4ee85d5ccaf"},
+ {file = "mypy-1.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:27a0f74a298769d9fdc8498fcb4f2beb86f0564bcdb1a37b58cbbe78e55cf8c0"},
+ {file = "mypy-1.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:65b122a993d9c81ea0bfde7689b3365318a88bde952e4dfa1b3a8b4ac05d168b"},
+ {file = "mypy-1.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:5deb252fd42a77add936b463033a59b8e48eb2eaec2976d76b6878d031933fe4"},
+ {file = "mypy-1.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2013226d17f20468f34feddd6aae4635a55f79626549099354ce641bc7d40262"},
+ {file = "mypy-1.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:48525aec92b47baed9b3380371ab8ab6e63a5aab317347dfe9e55e02aaad22e8"},
+ {file = "mypy-1.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c96b8a0c019fe29040d520d9257d8c8f122a7343a8307bf8d6d4a43f5c5bfcc8"},
+ {file = "mypy-1.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:448de661536d270ce04f2d7dddaa49b2fdba6e3bd8a83212164d4174ff43aa65"},
+ {file = "mypy-1.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:d42a98e76070a365a1d1c220fcac8aa4ada12ae0db679cb4d910fabefc88b994"},
+ {file = "mypy-1.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e64f48c6176e243ad015e995de05af7f22bbe370dbb5b32bd6988438ec873919"},
+ {file = "mypy-1.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5fdd63e4f50e3538617887e9aee91855368d9fc1dea30da743837b0df7373bc4"},
+ {file = "mypy-1.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:dbeb24514c4acbc78d205f85dd0e800f34062efcc1f4a4857c57e4b4b8712bff"},
+ {file = "mypy-1.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a2948c40a7dd46c1c33765718936669dc1f628f134013b02ff5ac6c7ef6942bf"},
+ {file = "mypy-1.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5bc8d6bd3b274dd3846597855d96d38d947aedba18776aa998a8d46fabdaed76"},
+ {file = "mypy-1.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:17455cda53eeee0a4adb6371a21dd3dbf465897de82843751cf822605d152c8c"},
+ {file = "mypy-1.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e831662208055b006eef68392a768ff83596035ffd6d846786578ba1714ba8f6"},
+ {file = "mypy-1.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e60d0b09f62ae97a94605c3f73fd952395286cf3e3b9e7b97f60b01ddfbbda88"},
+ {file = "mypy-1.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:0af4f0e20706aadf4e6f8f8dc5ab739089146b83fd53cb4a7e0e850ef3de0bb6"},
+ {file = "mypy-1.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:24189f23dc66f83b839bd1cce2dfc356020dfc9a8bae03978477b15be61b062e"},
+ {file = "mypy-1.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:93a85495fb13dc484251b4c1fd7a5ac370cd0d812bbfc3b39c1bafefe95275d5"},
+ {file = "mypy-1.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f546ac34093c6ce33f6278f7c88f0f147a4849386d3bf3ae193702f4fe31407"},
+ {file = "mypy-1.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c6c2ccb7af7154673c591189c3687b013122c5a891bb5651eca3db8e6c6c55bd"},
+ {file = "mypy-1.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:15b5a824b58c7c822c51bc66308e759243c32631896743f030daf449fe3677f3"},
+ {file = "mypy-1.0.1-py3-none-any.whl", hash = "sha256:eda5c8b9949ed411ff752b9a01adda31afe7eae1e53e946dbdf9db23865e66c4"},
+ {file = "mypy-1.0.1.tar.gz", hash = "sha256:28cea5a6392bb43d266782983b5a4216c25544cd7d80be681a155ddcdafd152d"},
+]
+
+[package.dependencies]
+mypy-extensions = ">=0.4.3"
+tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
+typed-ast = {version = ">=1.4.0,<2", markers = "python_version < \"3.8\""}
+typing-extensions = ">=3.10"
+
+[package.extras]
+dmypy = ["psutil (>=4.0)"]
+install-types = ["pip"]
+python2 = ["typed-ast (>=1.4.0,<2)"]
+reports = ["lxml"]
+
+[[package]]
+name = "mypy-extensions"
+version = "1.0.0"
+description = "Type system extensions for programs checked with the mypy type checker."
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "mypy_extensions-1.0.0-py3-none-any.whl", hash = "sha256:4392f6c0eb8a5668a69e23d168ffa70f0be9ccfd32b5cc2d26a34ae5b844552d"},
+ {file = "mypy_extensions-1.0.0.tar.gz", hash = "sha256:75dbf8955dc00442a438fc4d0666508a9a97b6bd41aa2f0ffe9d2f2725af0782"},
+]
+
+[[package]]
+name = "nbclassic"
+version = "0.5.2"
+description = "Jupyter Notebook as a Jupyter Server extension."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "nbclassic-0.5.2-py3-none-any.whl", hash = "sha256:6403a996562dadefa7fee9c49e17b663b5fd508241de5df655b90011cf3342d9"},
+ {file = "nbclassic-0.5.2.tar.gz", hash = "sha256:40f11bbcc59e8956c3d5ef132dec8e5a853e893ecf831e791d54da0d8a50d79d"},
+]
+
+[package.dependencies]
+argon2-cffi = "*"
+ipykernel = "*"
+ipython-genutils = "*"
+jinja2 = "*"
+jupyter-client = ">=6.1.1"
+jupyter-core = ">=4.6.1"
+jupyter-server = ">=1.8"
+nbconvert = ">=5"
+nbformat = "*"
+nest-asyncio = ">=1.5"
+notebook-shim = ">=0.1.0"
+prometheus-client = "*"
+pyzmq = ">=17"
+Send2Trash = ">=1.8.0"
+terminado = ">=0.8.3"
+tornado = ">=6.1"
+traitlets = ">=4.2.1"
+
+[package.extras]
+docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
+json-logging = ["json-logging"]
+test = ["coverage", "nbval", "pytest", "pytest-cov", "pytest-jupyter", "pytest-playwright", "pytest-tornasync", "requests", "requests-unixsocket", "testpath"]
+
+[[package]]
+name = "nbclient"
+version = "0.6.8"
+description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
+category = "dev"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+ {file = "nbclient-0.6.8-py3-none-any.whl", hash = "sha256:7cce8b415888539180535953f80ea2385cdbb444944cdeb73ffac1556fdbc228"},
+ {file = "nbclient-0.6.8.tar.gz", hash = "sha256:268fde3457cafe1539e32eb1c6d796bbedb90b9e92bacd3e43d83413734bb0e8"},
+]
+
+[package.dependencies]
+jupyter-client = ">=6.1.5"
+nbformat = ">=5.0"
+nest-asyncio = "*"
+traitlets = ">=5.2.2"
+
+[package.extras]
+sphinx = ["Sphinx (>=1.7)", "autodoc-traits", "mock", "moto", "myst-parser", "sphinx-book-theme"]
+test = ["black", "check-manifest", "flake8", "ipykernel", "ipython", "ipywidgets", "mypy", "nbconvert", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
+
+[[package]]
+name = "nbconvert"
+version = "7.2.9"
+description = "Converting Jupyter Notebooks"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "nbconvert-7.2.9-py3-none-any.whl", hash = "sha256:495638c5e06005f4a5ce828d8a81d28e34f95c20f4384d5d7a22254b443836e7"},
+ {file = "nbconvert-7.2.9.tar.gz", hash = "sha256:a42c3ac137c64f70cbe4d763111bf358641ea53b37a01a5c202ed86374af5234"},
+]
+
+[package.dependencies]
+beautifulsoup4 = "*"
+bleach = "*"
+defusedxml = "*"
+importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
+jinja2 = ">=3.0"
+jupyter-core = ">=4.7"
+jupyterlab-pygments = "*"
+markupsafe = ">=2.0"
+mistune = ">=2.0.3,<3"
+nbclient = ">=0.5.0"
+nbformat = ">=5.1"
+packaging = "*"
+pandocfilters = ">=1.4.1"
+pygments = ">=2.4.1"
+tinycss2 = "*"
+traitlets = ">=5.0"
+
+[package.extras]
+all = ["nbconvert[docs,qtpdf,serve,test,webpdf]"]
+docs = ["ipykernel", "ipython", "myst-parser", "nbsphinx (>=0.2.12)", "pydata-sphinx-theme", "sphinx (==5.0.2)", "sphinxcontrib-spelling"]
+qtpdf = ["nbconvert[qtpng]"]
+qtpng = ["pyqtwebengine (>=5.15)"]
+serve = ["tornado (>=6.1)"]
+test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pytest", "pytest-dependency"]
+webpdf = ["pyppeteer (>=1,<1.1)"]
+
+[[package]]
+name = "nbformat"
+version = "5.7.3"
+description = "The Jupyter Notebook format"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "nbformat-5.7.3-py3-none-any.whl", hash = "sha256:22a98a6516ca216002b0a34591af5bcb8072ca6c63910baffc901cfa07fefbf0"},
+ {file = "nbformat-5.7.3.tar.gz", hash = "sha256:4b021fca24d3a747bf4e626694033d792d594705829e5e35b14ee3369f9f6477"},
+]
+
+[package.dependencies]
+fastjsonschema = "*"
+importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.8\""}
+jsonschema = ">=2.6"
+jupyter-core = "*"
+traitlets = ">=5.1"
+
+[package.extras]
+docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt", "sphinxcontrib-spelling"]
+test = ["pep440", "pre-commit", "pytest", "testpath"]
+
+[[package]]
+name = "nbmake"
+version = "1.4.1"
+description = "Pytest plugin for testing notebooks"
+category = "dev"
+optional = false
+python-versions = ">=3.7.0,<4.0.0"
+files = [
+ {file = "nbmake-1.4.1-py3-none-any.whl", hash = "sha256:1c1619fc54a2fb64bfd84acbdf13b2ffba0e4a03bfea1684f4648f28ca850ada"},
+ {file = "nbmake-1.4.1.tar.gz", hash = "sha256:7f602ba5195e80e4f2527944bb06d3b4df0d1520e73ba66126b51132b1f646ea"},
+]
+
+[package.dependencies]
+ipykernel = ">=5.4.0"
+nbclient = ">=0.6.6,<0.7.0"
+nbformat = ">=5.0.8,<6.0.0"
+pydantic = ">=1.7.2,<2.0.0"
+Pygments = ">=2.7.3,<3.0.0"
+pytest = ">=6.1.0"
+
+[[package]]
+name = "nest-asyncio"
+version = "1.5.6"
+description = "Patch asyncio to allow nested event loops"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "nest_asyncio-1.5.6-py3-none-any.whl", hash = "sha256:b9a953fb40dceaa587d109609098db21900182b16440652454a146cffb06e8b8"},
+ {file = "nest_asyncio-1.5.6.tar.gz", hash = "sha256:d267cc1ff794403f7df692964d1d2a3fa9418ffea2a3f6859a439ff482fef290"},
+]
+
+[[package]]
+name = "networkx"
+version = "2.6.3"
+description = "Python package for creating and manipulating graphs and networks"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "networkx-2.6.3-py3-none-any.whl", hash = "sha256:80b6b89c77d1dfb64a4c7854981b60aeea6360ac02c6d4e4913319e0a313abef"},
+ {file = "networkx-2.6.3.tar.gz", hash = "sha256:c0946ed31d71f1b732b5aaa6da5a0388a345019af232ce2f49c766e2d6795c51"},
+]
+
+[package.extras]
+default = ["matplotlib (>=3.3)", "numpy (>=1.19)", "pandas (>=1.1)", "scipy (>=1.5,!=1.6.1)"]
+developer = ["black (==21.5b1)", "pre-commit (>=2.12)"]
+doc = ["nb2plots (>=0.6)", "numpydoc (>=1.1)", "pillow (>=8.2)", "pydata-sphinx-theme (>=0.6,<1.0)", "sphinx (>=4.0,<5.0)", "sphinx-gallery (>=0.9,<1.0)", "texext (>=0.6.6)"]
+extra = ["lxml (>=4.5)", "pydot (>=1.4.1)", "pygraphviz (>=1.7)"]
+test = ["codecov (>=2.1)", "pytest (>=6.2)", "pytest-cov (>=2.12)"]
+
+[[package]]
+name = "notebook"
+version = "6.5.2"
+description = "A web-based notebook environment for interactive computing"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "notebook-6.5.2-py3-none-any.whl", hash = "sha256:e04f9018ceb86e4fa841e92ea8fb214f8d23c1cedfde530cc96f92446924f0e4"},
+ {file = "notebook-6.5.2.tar.gz", hash = "sha256:c1897e5317e225fc78b45549a6ab4b668e4c996fd03a04e938fe5e7af2bfffd0"},
+]
+
+[package.dependencies]
+argon2-cffi = "*"
+ipykernel = "*"
+ipython-genutils = "*"
+jinja2 = "*"
+jupyter-client = ">=5.3.4"
+jupyter-core = ">=4.6.1"
+nbclassic = ">=0.4.7"
+nbconvert = ">=5"
+nbformat = "*"
+nest-asyncio = ">=1.5"
+prometheus-client = "*"
+pyzmq = ">=17"
+Send2Trash = ">=1.8.0"
+terminado = ">=0.8.3"
+tornado = ">=6.1"
+traitlets = ">=4.2.1"
+
+[package.extras]
+docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
+json-logging = ["json-logging"]
+test = ["coverage", "nbval", "pytest", "pytest-cov", "requests", "requests-unixsocket", "selenium (==4.1.5)", "testpath"]
+
+[[package]]
+name = "notebook-shim"
+version = "0.2.2"
+description = "A shim layer for notebook traits and config"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "notebook_shim-0.2.2-py3-none-any.whl", hash = "sha256:9c6c30f74c4fbea6fce55c1be58e7fd0409b1c681b075dcedceb005db5026949"},
+ {file = "notebook_shim-0.2.2.tar.gz", hash = "sha256:090e0baf9a5582ff59b607af523ca2db68ff216da0c69956b62cab2ef4fc9c3f"},
+]
+
+[package.dependencies]
+jupyter-server = ">=1.8,<3"
+
+[package.extras]
+test = ["pytest", "pytest-console-scripts", "pytest-tornasync"]
+
+[[package]]
+name = "numpy"
+version = "1.21.6"
+description = "NumPy is the fundamental package for array computing with Python."
+category = "main"
+optional = false
+python-versions = ">=3.7,<3.11"
+files = [
+ {file = "numpy-1.21.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8737609c3bbdd48e380d463134a35ffad3b22dc56295eff6f79fd85bd0eeeb25"},
+ {file = "numpy-1.21.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:fdffbfb6832cd0b300995a2b08b8f6fa9f6e856d562800fea9182316d99c4e8e"},
+ {file = "numpy-1.21.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:3820724272f9913b597ccd13a467cc492a0da6b05df26ea09e78b171a0bb9da6"},
+ {file = "numpy-1.21.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f17e562de9edf691a42ddb1eb4a5541c20dd3f9e65b09ded2beb0799c0cf29bb"},
+ {file = "numpy-1.21.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f30427731561ce75d7048ac254dbe47a2ba576229250fb60f0fb74db96501a1"},
+ {file = "numpy-1.21.6-cp310-cp310-win32.whl", hash = "sha256:d4bf4d43077db55589ffc9009c0ba0a94fa4908b9586d6ccce2e0b164c86303c"},
+ {file = "numpy-1.21.6-cp310-cp310-win_amd64.whl", hash = "sha256:d136337ae3cc69aa5e447e78d8e1514be8c3ec9b54264e680cf0b4bd9011574f"},
+ {file = "numpy-1.21.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6aaf96c7f8cebc220cdfc03f1d5a31952f027dda050e5a703a0d1c396075e3e7"},
+ {file = "numpy-1.21.6-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:67c261d6c0a9981820c3a149d255a76918278a6b03b6a036800359aba1256d46"},
+ {file = "numpy-1.21.6-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a6be4cb0ef3b8c9250c19cc122267263093eee7edd4e3fa75395dfda8c17a8e2"},
+ {file = "numpy-1.21.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7c4068a8c44014b2d55f3c3f574c376b2494ca9cc73d2f1bd692382b6dffe3db"},
+ {file = "numpy-1.21.6-cp37-cp37m-win32.whl", hash = "sha256:7c7e5fa88d9ff656e067876e4736379cc962d185d5cd808014a8a928d529ef4e"},
+ {file = "numpy-1.21.6-cp37-cp37m-win_amd64.whl", hash = "sha256:bcb238c9c96c00d3085b264e5c1a1207672577b93fa666c3b14a45240b14123a"},
+ {file = "numpy-1.21.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:82691fda7c3f77c90e62da69ae60b5ac08e87e775b09813559f8901a88266552"},
+ {file = "numpy-1.21.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:643843bcc1c50526b3a71cd2ee561cf0d8773f062c8cbaf9ffac9fdf573f83ab"},
+ {file = "numpy-1.21.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:357768c2e4451ac241465157a3e929b265dfac85d9214074985b1786244f2ef3"},
+ {file = "numpy-1.21.6-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:9f411b2c3f3d76bba0865b35a425157c5dcf54937f82bbeb3d3c180789dd66a6"},
+ {file = "numpy-1.21.6-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4aa48afdce4660b0076a00d80afa54e8a97cd49f457d68a4342d188a09451c1a"},
+ {file = "numpy-1.21.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d6a96eef20f639e6a97d23e57dd0c1b1069a7b4fd7027482a4c5c451cd7732f4"},
+ {file = "numpy-1.21.6-cp38-cp38-win32.whl", hash = "sha256:5c3c8def4230e1b959671eb959083661b4a0d2e9af93ee339c7dada6759a9470"},
+ {file = "numpy-1.21.6-cp38-cp38-win_amd64.whl", hash = "sha256:bf2ec4b75d0e9356edea834d1de42b31fe11f726a81dfb2c2112bc1eaa508fcf"},
+ {file = "numpy-1.21.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:4391bd07606be175aafd267ef9bea87cf1b8210c787666ce82073b05f202add1"},
+ {file = "numpy-1.21.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:67f21981ba2f9d7ba9ade60c9e8cbaa8cf8e9ae51673934480e45cf55e953673"},
+ {file = "numpy-1.21.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ee5ec40fdd06d62fe5d4084bef4fd50fd4bb6bfd2bf519365f569dc470163ab0"},
+ {file = "numpy-1.21.6-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:1dbe1c91269f880e364526649a52eff93ac30035507ae980d2fed33aaee633ac"},
+ {file = "numpy-1.21.6-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d9caa9d5e682102453d96a0ee10c7241b72859b01a941a397fd965f23b3e016b"},
+ {file = "numpy-1.21.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58459d3bad03343ac4b1b42ed14d571b8743dc80ccbf27444f266729df1d6f5b"},
+ {file = "numpy-1.21.6-cp39-cp39-win32.whl", hash = "sha256:7f5ae4f304257569ef3b948810816bc87c9146e8c446053539947eedeaa32786"},
+ {file = "numpy-1.21.6-cp39-cp39-win_amd64.whl", hash = "sha256:e31f0bb5928b793169b87e3d1e070f2342b22d5245c755e2b81caa29756246c3"},
+ {file = "numpy-1.21.6-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:dd1c8f6bd65d07d3810b90d02eba7997e32abbdf1277a481d698969e921a3be0"},
+ {file = "numpy-1.21.6.zip", hash = "sha256:ecb55251139706669fdec2ff073c98ef8e9a84473e51e716211b41aa0f18e656"},
+]
+
+[[package]]
+name = "numpy"
+version = "1.24.2"
+description = "Fundamental package for array computing in Python"
+category = "main"
+optional = false
+python-versions = ">=3.8"
+files = [
+ {file = "numpy-1.24.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:eef70b4fc1e872ebddc38cddacc87c19a3709c0e3e5d20bf3954c147b1dd941d"},
+ {file = "numpy-1.24.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e8d2859428712785e8a8b7d2b3ef0a1d1565892367b32f915c4a4df44d0e64f5"},
+ {file = "numpy-1.24.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6524630f71631be2dabe0c541e7675db82651eb998496bbe16bc4f77f0772253"},
+ {file = "numpy-1.24.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a51725a815a6188c662fb66fb32077709a9ca38053f0274640293a14fdd22978"},
+ {file = "numpy-1.24.2-cp310-cp310-win32.whl", hash = "sha256:2620e8592136e073bd12ee4536149380695fbe9ebeae845b81237f986479ffc9"},
+ {file = "numpy-1.24.2-cp310-cp310-win_amd64.whl", hash = "sha256:97cf27e51fa078078c649a51d7ade3c92d9e709ba2bfb97493007103c741f1d0"},
+ {file = "numpy-1.24.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7de8fdde0003f4294655aa5d5f0a89c26b9f22c0a58790c38fae1ed392d44a5a"},
+ {file = "numpy-1.24.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4173bde9fa2a005c2c6e2ea8ac1618e2ed2c1c6ec8a7657237854d42094123a0"},
+ {file = "numpy-1.24.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4cecaed30dc14123020f77b03601559fff3e6cd0c048f8b5289f4eeabb0eb281"},
+ {file = "numpy-1.24.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a23f8440561a633204a67fb44617ce2a299beecf3295f0d13c495518908e910"},
+ {file = "numpy-1.24.2-cp311-cp311-win32.whl", hash = "sha256:e428c4fbfa085f947b536706a2fc349245d7baa8334f0c5723c56a10595f9b95"},
+ {file = "numpy-1.24.2-cp311-cp311-win_amd64.whl", hash = "sha256:557d42778a6869c2162deb40ad82612645e21d79e11c1dc62c6e82a2220ffb04"},
+ {file = "numpy-1.24.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d0a2db9d20117bf523dde15858398e7c0858aadca7c0f088ac0d6edd360e9ad2"},
+ {file = "numpy-1.24.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c72a6b2f4af1adfe193f7beb91ddf708ff867a3f977ef2ec53c0ffb8283ab9f5"},
+ {file = "numpy-1.24.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c29e6bd0ec49a44d7690ecb623a8eac5ab8a923bce0bea6293953992edf3a76a"},
+ {file = "numpy-1.24.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2eabd64ddb96a1239791da78fa5f4e1693ae2dadc82a76bc76a14cbb2b966e96"},
+ {file = "numpy-1.24.2-cp38-cp38-win32.whl", hash = "sha256:e3ab5d32784e843fc0dd3ab6dcafc67ef806e6b6828dc6af2f689be0eb4d781d"},
+ {file = "numpy-1.24.2-cp38-cp38-win_amd64.whl", hash = "sha256:76807b4063f0002c8532cfeac47a3068a69561e9c8715efdad3c642eb27c0756"},
+ {file = "numpy-1.24.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4199e7cfc307a778f72d293372736223e39ec9ac096ff0a2e64853b866a8e18a"},
+ {file = "numpy-1.24.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:adbdce121896fd3a17a77ab0b0b5eedf05a9834a18699db6829a64e1dfccca7f"},
+ {file = "numpy-1.24.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:889b2cc88b837d86eda1b17008ebeb679d82875022200c6e8e4ce6cf549b7acb"},
+ {file = "numpy-1.24.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f64bb98ac59b3ea3bf74b02f13836eb2e24e48e0ab0145bbda646295769bd780"},
+ {file = "numpy-1.24.2-cp39-cp39-win32.whl", hash = "sha256:63e45511ee4d9d976637d11e6c9864eae50e12dc9598f531c035265991910468"},
+ {file = "numpy-1.24.2-cp39-cp39-win_amd64.whl", hash = "sha256:a77d3e1163a7770164404607b7ba3967fb49b24782a6ef85d9b5f54126cc39e5"},
+ {file = "numpy-1.24.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:92011118955724465fb6853def593cf397b4a1367495e0b59a7e69d40c4eb71d"},
+ {file = "numpy-1.24.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9006288bcf4895917d02583cf3411f98631275bc67cce355a7f39f8c14338fa"},
+ {file = "numpy-1.24.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:150947adbdfeceec4e5926d956a06865c1c690f2fd902efede4ca6fe2e657c3f"},
+ {file = "numpy-1.24.2.tar.gz", hash = "sha256:003a9f530e880cb2cd177cba1af7220b9aa42def9c4afc2a2fc3ee6be7eb2b22"},
+]
+
+[[package]]
+name = "nvidia-cublas-cu11"
+version = "11.10.3.66"
+description = "CUBLAS native runtime libraries"
+category = "main"
+optional = false
+python-versions = ">=3"
+files = [
+ {file = "nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl", hash = "sha256:d32e4d75f94ddfb93ea0a5dda08389bcc65d8916a25cb9f37ac89edaeed3bded"},
+ {file = "nvidia_cublas_cu11-11.10.3.66-py3-none-win_amd64.whl", hash = "sha256:8ac17ba6ade3ed56ab898a036f9ae0756f1e81052a317bf98f8c6d18dc3ae49e"},
+]
+
+[package.dependencies]
+setuptools = "*"
+wheel = "*"
+
+[[package]]
+name = "nvidia-cuda-nvrtc-cu11"
+version = "11.7.99"
+description = "NVRTC native runtime libraries"
+category = "main"
+optional = false
+python-versions = ">=3"
+files = [
+ {file = "nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl", hash = "sha256:9f1562822ea264b7e34ed5930567e89242d266448e936b85bc97a3370feabb03"},
+ {file = "nvidia_cuda_nvrtc_cu11-11.7.99-py3-none-manylinux1_x86_64.whl", hash = "sha256:f7d9610d9b7c331fa0da2d1b2858a4a8315e6d49765091d28711c8946e7425e7"},
+ {file = "nvidia_cuda_nvrtc_cu11-11.7.99-py3-none-win_amd64.whl", hash = "sha256:f2effeb1309bdd1b3854fc9b17eaf997808f8b25968ce0c7070945c4265d64a3"},
+]
+
+[package.dependencies]
+setuptools = "*"
+wheel = "*"
+
+[[package]]
+name = "nvidia-cuda-runtime-cu11"
+version = "11.7.99"
+description = "CUDA Runtime native Libraries"
+category = "main"
+optional = false
+python-versions = ">=3"
+files = [
+ {file = "nvidia_cuda_runtime_cu11-11.7.99-py3-none-manylinux1_x86_64.whl", hash = "sha256:cc768314ae58d2641f07eac350f40f99dcb35719c4faff4bc458a7cd2b119e31"},
+ {file = "nvidia_cuda_runtime_cu11-11.7.99-py3-none-win_amd64.whl", hash = "sha256:bc77fa59a7679310df9d5c70ab13c4e34c64ae2124dd1efd7e5474b71be125c7"},
+]
+
+[package.dependencies]
+setuptools = "*"
+wheel = "*"
+
+[[package]]
+name = "nvidia-cudnn-cu11"
+version = "8.5.0.96"
+description = "cuDNN runtime libraries"
+category = "main"
+optional = false
+python-versions = ">=3"
+files = [
+ {file = "nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl", hash = "sha256:402f40adfc6f418f9dae9ab402e773cfed9beae52333f6d86ae3107a1b9527e7"},
+ {file = "nvidia_cudnn_cu11-8.5.0.96-py3-none-manylinux1_x86_64.whl", hash = "sha256:71f8111eb830879ff2836db3cccf03bbd735df9b0d17cd93761732ac50a8a108"},
+]
+
+[package.dependencies]
+setuptools = "*"
+wheel = "*"
+
+[[package]]
+name = "packageurl-python"
+version = "0.10.4"
+description = "A purl aka. Package URL parser and builder"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "packageurl-python-0.10.4.tar.gz", hash = "sha256:5c91334f942cd55d45eb0c67dd339a535ef90e25f05b9ec016ad188ed0ef9048"},
+ {file = "packageurl_python-0.10.4-py3-none-any.whl", hash = "sha256:bf8a1ffe755634776f6563904d792fb0aa13b377fc86115c36fe17f69b6e59db"},
+]
+
+[package.extras]
+build = ["wheel"]
+test = ["black", "isort", "pytest"]
+
+[[package]]
+name = "packaging"
+version = "23.0"
+description = "Core utilities for Python packages"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"},
+ {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"},
+]
+
+[[package]]
+name = "pandocfilters"
+version = "1.5.0"
+description = "Utilities for writing pandoc filters in python"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
+ {file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
+]
+
+[[package]]
+name = "parso"
+version = "0.8.3"
+description = "A Python Parser"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
+ {file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
+]
+
+[package.extras]
+qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
+testing = ["docopt", "pytest (<6.0.0)"]
+
+[[package]]
+name = "pathspec"
+version = "0.11.0"
+description = "Utility library for gitignore style pattern matching of file paths."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pathspec-0.11.0-py3-none-any.whl", hash = "sha256:3a66eb970cbac598f9e5ccb5b2cf58930cd8e3ed86d393d541eaf2d8b1705229"},
+ {file = "pathspec-0.11.0.tar.gz", hash = "sha256:64d338d4e0914e91c1792321e6907b5a593f1ab1851de7fc269557a21b30ebbc"},
+]
+
+[[package]]
+name = "pexpect"
+version = "4.8.0"
+description = "Pexpect allows easy control of interactive console applications."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
+ {file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
+]
+
+[package.dependencies]
+ptyprocess = ">=0.5"
+
+[[package]]
+name = "pickleshare"
+version = "0.7.5"
+description = "Tiny 'shelve'-like database with concurrency support"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
+ {file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
+]
+
+[[package]]
+name = "pillow"
+version = "9.4.0"
+description = "Python Imaging Library (Fork)"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "Pillow-9.4.0-1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b4b4e9dda4f4e4c4e6896f93e84a8f0bcca3b059de9ddf67dac3c334b1195e1"},
+ {file = "Pillow-9.4.0-1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fb5c1ad6bad98c57482236a21bf985ab0ef42bd51f7ad4e4538e89a997624e12"},
+ {file = "Pillow-9.4.0-1-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:f0caf4a5dcf610d96c3bd32932bfac8aee61c96e60481c2a0ea58da435e25acd"},
+ {file = "Pillow-9.4.0-1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:3f4cc516e0b264c8d4ccd6b6cbc69a07c6d582d8337df79be1e15a5056b258c9"},
+ {file = "Pillow-9.4.0-1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:b8c2f6eb0df979ee99433d8b3f6d193d9590f735cf12274c108bd954e30ca858"},
+ {file = "Pillow-9.4.0-1-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b70756ec9417c34e097f987b4d8c510975216ad26ba6e57ccb53bc758f490dab"},
+ {file = "Pillow-9.4.0-1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:43521ce2c4b865d385e78579a082b6ad1166ebed2b1a2293c3be1d68dd7ca3b9"},
+ {file = "Pillow-9.4.0-2-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:9d9a62576b68cd90f7075876f4e8444487db5eeea0e4df3ba298ee38a8d067b0"},
+ {file = "Pillow-9.4.0-2-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:87708d78a14d56a990fbf4f9cb350b7d89ee8988705e58e39bdf4d82c149210f"},
+ {file = "Pillow-9.4.0-2-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:8a2b5874d17e72dfb80d917213abd55d7e1ed2479f38f001f264f7ce7bae757c"},
+ {file = "Pillow-9.4.0-2-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:83125753a60cfc8c412de5896d10a0a405e0bd88d0470ad82e0869ddf0cb3848"},
+ {file = "Pillow-9.4.0-2-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:9e5f94742033898bfe84c93c831a6f552bb629448d4072dd312306bab3bd96f1"},
+ {file = "Pillow-9.4.0-2-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:013016af6b3a12a2f40b704677f8b51f72cb007dac785a9933d5c86a72a7fe33"},
+ {file = "Pillow-9.4.0-2-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:99d92d148dd03fd19d16175b6d355cc1b01faf80dae93c6c3eb4163709edc0a9"},
+ {file = "Pillow-9.4.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:2968c58feca624bb6c8502f9564dd187d0e1389964898f5e9e1fbc8533169157"},
+ {file = "Pillow-9.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c5c1362c14aee73f50143d74389b2c158707b4abce2cb055b7ad37ce60738d47"},
+ {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd752c5ff1b4a870b7661234694f24b1d2b9076b8bf337321a814c612665f343"},
+ {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9a3049a10261d7f2b6514d35bbb7a4dfc3ece4c4de14ef5876c4b7a23a0e566d"},
+ {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16a8df99701f9095bea8a6c4b3197da105df6f74e6176c5b410bc2df2fd29a57"},
+ {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:94cdff45173b1919350601f82d61365e792895e3c3a3443cf99819e6fbf717a5"},
+ {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:ed3e4b4e1e6de75fdc16d3259098de7c6571b1a6cc863b1a49e7d3d53e036070"},
+ {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d5b2f8a31bd43e0f18172d8ac82347c8f37ef3e0b414431157718aa234991b28"},
+ {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:09b89ddc95c248ee788328528e6a2996e09eaccddeeb82a5356e92645733be35"},
+ {file = "Pillow-9.4.0-cp310-cp310-win32.whl", hash = "sha256:f09598b416ba39a8f489c124447b007fe865f786a89dbfa48bb5cf395693132a"},
+ {file = "Pillow-9.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6e78171be3fb7941f9910ea15b4b14ec27725865a73c15277bc39f5ca4f8391"},
+ {file = "Pillow-9.4.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3fa1284762aacca6dc97474ee9c16f83990b8eeb6697f2ba17140d54b453e133"},
+ {file = "Pillow-9.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:eaef5d2de3c7e9b21f1e762f289d17b726c2239a42b11e25446abf82b26ac132"},
+ {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4dfdae195335abb4e89cc9762b2edc524f3c6e80d647a9a81bf81e17e3fb6f0"},
+ {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6abfb51a82e919e3933eb137e17c4ae9c0475a25508ea88993bb59faf82f3b35"},
+ {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:451f10ef963918e65b8869e17d67db5e2f4ab40e716ee6ce7129b0cde2876eab"},
+ {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:6663977496d616b618b6cfa43ec86e479ee62b942e1da76a2c3daa1c75933ef4"},
+ {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:60e7da3a3ad1812c128750fc1bc14a7ceeb8d29f77e0a2356a8fb2aa8925287d"},
+ {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:19005a8e58b7c1796bc0167862b1f54a64d3b44ee5d48152b06bb861458bc0f8"},
+ {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f715c32e774a60a337b2bb8ad9839b4abf75b267a0f18806f6f4f5f1688c4b5a"},
+ {file = "Pillow-9.4.0-cp311-cp311-win32.whl", hash = "sha256:b222090c455d6d1a64e6b7bb5f4035c4dff479e22455c9eaa1bdd4c75b52c80c"},
+ {file = "Pillow-9.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:ba6612b6548220ff5e9df85261bddc811a057b0b465a1226b39bfb8550616aee"},
+ {file = "Pillow-9.4.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5f532a2ad4d174eb73494e7397988e22bf427f91acc8e6ebf5bb10597b49c493"},
+ {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dd5a9c3091a0f414a963d427f920368e2b6a4c2f7527fdd82cde8ef0bc7a327"},
+ {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef21af928e807f10bf4141cad4746eee692a0dd3ff56cfb25fce076ec3cc8abe"},
+ {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:847b114580c5cc9ebaf216dd8c8dbc6b00a3b7ab0131e173d7120e6deade1f57"},
+ {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:653d7fb2df65efefbcbf81ef5fe5e5be931f1ee4332c2893ca638c9b11a409c4"},
+ {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:46f39cab8bbf4a384ba7cb0bc8bae7b7062b6a11cfac1ca4bc144dea90d4a9f5"},
+ {file = "Pillow-9.4.0-cp37-cp37m-win32.whl", hash = "sha256:7ac7594397698f77bce84382929747130765f66406dc2cd8b4ab4da68ade4c6e"},
+ {file = "Pillow-9.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:46c259e87199041583658457372a183636ae8cd56dbf3f0755e0f376a7f9d0e6"},
+ {file = "Pillow-9.4.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:0e51f608da093e5d9038c592b5b575cadc12fd748af1479b5e858045fff955a9"},
+ {file = "Pillow-9.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:765cb54c0b8724a7c12c55146ae4647e0274a839fb6de7bcba841e04298e1011"},
+ {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:519e14e2c49fcf7616d6d2cfc5c70adae95682ae20f0395e9280db85e8d6c4df"},
+ {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d197df5489004db87d90b918033edbeee0bd6df3848a204bca3ff0a903bef837"},
+ {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0845adc64fe9886db00f5ab68c4a8cd933ab749a87747555cec1c95acea64b0b"},
+ {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:e1339790c083c5a4de48f688b4841f18df839eb3c9584a770cbd818b33e26d5d"},
+ {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:a96e6e23f2b79433390273eaf8cc94fec9c6370842e577ab10dabdcc7ea0a66b"},
+ {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:7cfc287da09f9d2a7ec146ee4d72d6ea1342e770d975e49a8621bf54eaa8f30f"},
+ {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d7081c084ceb58278dd3cf81f836bc818978c0ccc770cbbb202125ddabec6628"},
+ {file = "Pillow-9.4.0-cp38-cp38-win32.whl", hash = "sha256:df41112ccce5d47770a0c13651479fbcd8793f34232a2dd9faeccb75eb5d0d0d"},
+ {file = "Pillow-9.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:7a21222644ab69ddd9967cfe6f2bb420b460dae4289c9d40ff9a4896e7c35c9a"},
+ {file = "Pillow-9.4.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0f3269304c1a7ce82f1759c12ce731ef9b6e95b6df829dccd9fe42912cc48569"},
+ {file = "Pillow-9.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cb362e3b0976dc994857391b776ddaa8c13c28a16f80ac6522c23d5257156bed"},
+ {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2e0f87144fcbbe54297cae708c5e7f9da21a4646523456b00cc956bd4c65815"},
+ {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:28676836c7796805914b76b1837a40f76827ee0d5398f72f7dcc634bae7c6264"},
+ {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0884ba7b515163a1a05440a138adeb722b8a6ae2c2b33aea93ea3118dd3a899e"},
+ {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:53dcb50fbdc3fb2c55431a9b30caeb2f7027fcd2aeb501459464f0214200a503"},
+ {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:e8c5cf126889a4de385c02a2c3d3aba4b00f70234bfddae82a5eaa3ee6d5e3e6"},
+ {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6c6b1389ed66cdd174d040105123a5a1bc91d0aa7059c7261d20e583b6d8cbd2"},
+ {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0dd4c681b82214b36273c18ca7ee87065a50e013112eea7d78c7a1b89a739153"},
+ {file = "Pillow-9.4.0-cp39-cp39-win32.whl", hash = "sha256:6d9dfb9959a3b0039ee06c1a1a90dc23bac3b430842dcb97908ddde05870601c"},
+ {file = "Pillow-9.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:54614444887e0d3043557d9dbc697dbb16cfb5a35d672b7a0fcc1ed0cf1c600b"},
+ {file = "Pillow-9.4.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b9b752ab91e78234941e44abdecc07f1f0d8f51fb62941d32995b8161f68cfe5"},
+ {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d3b56206244dc8711f7e8b7d6cad4663917cd5b2d950799425076681e8766286"},
+ {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aabdab8ec1e7ca7f1434d042bf8b1e92056245fb179790dc97ed040361f16bfd"},
+ {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:db74f5562c09953b2c5f8ec4b7dfd3f5421f31811e97d1dbc0a7c93d6e3a24df"},
+ {file = "Pillow-9.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e9d7747847c53a16a729b6ee5e737cf170f7a16611c143d95aa60a109a59c336"},
+ {file = "Pillow-9.4.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b52ff4f4e002f828ea6483faf4c4e8deea8d743cf801b74910243c58acc6eda3"},
+ {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:575d8912dca808edd9acd6f7795199332696d3469665ef26163cd090fa1f8bfa"},
+ {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c3c4ed2ff6760e98d262e0cc9c9a7f7b8a9f61aa4d47c58835cdaf7b0b8811bb"},
+ {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e621b0246192d3b9cb1dc62c78cfa4c6f6d2ddc0ec207d43c0dedecb914f152a"},
+ {file = "Pillow-9.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8f127e7b028900421cad64f51f75c051b628db17fb00e099eb148761eed598c9"},
+ {file = "Pillow-9.4.0.tar.gz", hash = "sha256:a1c2d7780448eb93fbcc3789bf3916aa5720d942e37945f4056680317f1cd23e"},
+]
+
+[package.extras]
+docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
+tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
+
+[[package]]
+name = "pip"
+version = "23.0.1"
+description = "The PyPA recommended tool for installing Python packages."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pip-23.0.1-py3-none-any.whl", hash = "sha256:236bcb61156d76c4b8a05821b988c7b8c35bf0da28a4b614e8d6ab5212c25c6f"},
+ {file = "pip-23.0.1.tar.gz", hash = "sha256:cd015ea1bfb0fcef59d8a286c1f8bebcb983f6317719d415dc5351efb7cd7024"},
+]
+
+[[package]]
+name = "pip-api"
+version = "0.0.30"
+description = "An unofficial, importable pip API"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pip-api-0.0.30.tar.gz", hash = "sha256:a05df2c7aa9b7157374bcf4273544201a0c7bae60a9c65bcf84f3959ef3896f3"},
+ {file = "pip_api-0.0.30-py3-none-any.whl", hash = "sha256:2a0314bd31522eb9ffe8a99668b0d07fee34ebc537931e7b6483001dbedcbdc9"},
+]
+
+[package.dependencies]
+pip = "*"
+
+[[package]]
+name = "pip-audit"
+version = "1.1.2"
+description = "A tool for scanning Python environments for known vulnerabilities"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pip-audit-1.1.2.tar.gz", hash = "sha256:374e8528a1376145cbe0f0ec4a7b6a5ebfd6152f665d274498ea49d8bffef24c"},
+ {file = "pip_audit-1.1.2-py3-none-any.whl", hash = "sha256:48325027b803376bee22ca273f8a1b477324c10663c6218a5acebfdc4a107328"},
+]
+
+[package.dependencies]
+CacheControl = {version = ">=0.12.10", extras = ["filecache"]}
+cyclonedx-python-lib = ">=0.11.1,<1.0.0"
+html5lib = ">=1.1"
+packaging = ">=21.0.0"
+pip-api = ">=0.0.26"
+progress = ">=1.6"
+resolvelib = ">=0.8.0"
+
+[package.extras]
+dev = ["black", "bump", "coverage[toml]", "flake8", "interrogate", "isort", "mypy", "pdoc3", "pretend", "pytest", "pytest-cov", "types-dataclasses", "types-html5lib", "types-requests"]
+
+[[package]]
+name = "pip-licenses"
+version = "3.5.5"
+description = "Dump the software license list of Python packages installed with pip."
+category = "dev"
+optional = false
+python-versions = "~=3.7"
+files = [
+ {file = "pip-licenses-3.5.5.tar.gz", hash = "sha256:748cfd7aca6e05032f9fa85691301295f4d943e87955be6914ca49abe3c075a4"},
+ {file = "pip_licenses-3.5.5-py3-none-any.whl", hash = "sha256:6129c116bab2b202d90d6e3a96092df4ad84c0c4d57bb70192fc03f8bf06d181"},
+]
+
+[package.dependencies]
+PTable = "*"
+
+[package.extras]
+test = ["docutils", "pytest-cov", "pytest-pycodestyle", "pytest-runner"]
+
+[[package]]
+name = "pkginfo"
+version = "1.9.6"
+description = "Query metadata from sdists / bdists / installed packages."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pkginfo-1.9.6-py3-none-any.whl", hash = "sha256:4b7a555a6d5a22169fcc9cf7bfd78d296b0361adad412a346c1226849af5e546"},
+ {file = "pkginfo-1.9.6.tar.gz", hash = "sha256:8fd5896e8718a4372f0ea9cc9d96f6417c9b986e23a4d116dda26b62cc29d046"},
+]
+
+[package.extras]
+testing = ["pytest", "pytest-cov"]
+
+[[package]]
+name = "pkgutil-resolve-name"
+version = "1.3.10"
+description = "Resolve a name to an object."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
+ {file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
+]
+
+[[package]]
+name = "platformdirs"
+version = "2.6.1"
+description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "platformdirs-2.6.1-py3-none-any.whl", hash = "sha256:69de5933ec873bd7ddae497f004be17cf200bce048dc987c28fc4e347d5349ff"},
+ {file = "platformdirs-2.6.1.tar.gz", hash = "sha256:e13f076e0f725f1beb58e7d26f80eff94099941740d3c664db03efecd6561271"},
+]
+
+[package.extras]
+docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
+
+[[package]]
+name = "pluggy"
+version = "1.0.0"
+description = "plugin and hook calling mechanisms for python"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
+ {file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
+
+[package.extras]
+dev = ["pre-commit", "tox"]
+testing = ["pytest", "pytest-benchmark"]
+
+[[package]]
+name = "progress"
+version = "1.6"
+description = "Easy to use progress bars"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "progress-1.6.tar.gz", hash = "sha256:c9c86e98b5c03fa1fe11e3b67c1feda4788b8d0fe7336c2ff7d5644ccfba34cd"},
+]
+
+[[package]]
+name = "prometheus-client"
+version = "0.16.0"
+description = "Python client for the Prometheus monitoring system."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "prometheus_client-0.16.0-py3-none-any.whl", hash = "sha256:0836af6eb2c8f4fed712b2f279f6c0a8bbab29f9f4aa15276b91c7cb0d1616ab"},
+ {file = "prometheus_client-0.16.0.tar.gz", hash = "sha256:a03e35b359f14dd1630898543e2120addfdeacd1a6069c1367ae90fd93ad3f48"},
+]
+
+[package.extras]
+twisted = ["twisted"]
+
+[[package]]
+name = "prompt-toolkit"
+version = "3.0.38"
+description = "Library for building powerful interactive command lines in Python"
+category = "dev"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+ {file = "prompt_toolkit-3.0.38-py3-none-any.whl", hash = "sha256:45ea77a2f7c60418850331366c81cf6b5b9cf4c7fd34616f733c5427e6abbb1f"},
+ {file = "prompt_toolkit-3.0.38.tar.gz", hash = "sha256:23ac5d50538a9a38c8bde05fecb47d0b403ecd0662857a86f886f798563d5b9b"},
+]
+
+[package.dependencies]
+wcwidth = "*"
+
+[[package]]
+name = "psutil"
+version = "5.9.4"
+description = "Cross-platform lib for process and system monitoring in Python."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "psutil-5.9.4-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c1ca331af862803a42677c120aff8a814a804e09832f166f226bfd22b56feee8"},
+ {file = "psutil-5.9.4-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:68908971daf802203f3d37e78d3f8831b6d1014864d7a85937941bb35f09aefe"},
+ {file = "psutil-5.9.4-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3ff89f9b835100a825b14c2808a106b6fdcc4b15483141482a12c725e7f78549"},
+ {file = "psutil-5.9.4-cp27-cp27m-win32.whl", hash = "sha256:852dd5d9f8a47169fe62fd4a971aa07859476c2ba22c2254d4a1baa4e10b95ad"},
+ {file = "psutil-5.9.4-cp27-cp27m-win_amd64.whl", hash = "sha256:9120cd39dca5c5e1c54b59a41d205023d436799b1c8c4d3ff71af18535728e94"},
+ {file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:6b92c532979bafc2df23ddc785ed116fced1f492ad90a6830cf24f4d1ea27d24"},
+ {file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:efeae04f9516907be44904cc7ce08defb6b665128992a56957abc9b61dca94b7"},
+ {file = "psutil-5.9.4-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:54d5b184728298f2ca8567bf83c422b706200bcbbfafdc06718264f9393cfeb7"},
+ {file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16653106f3b59386ffe10e0bad3bb6299e169d5327d3f187614b1cb8f24cf2e1"},
+ {file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54c0d3d8e0078b7666984e11b12b88af2db11d11249a8ac8920dd5ef68a66e08"},
+ {file = "psutil-5.9.4-cp36-abi3-win32.whl", hash = "sha256:149555f59a69b33f056ba1c4eb22bb7bf24332ce631c44a319cec09f876aaeff"},
+ {file = "psutil-5.9.4-cp36-abi3-win_amd64.whl", hash = "sha256:fd8522436a6ada7b4aad6638662966de0d61d241cb821239b2ae7013d41a43d4"},
+ {file = "psutil-5.9.4-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:6001c809253a29599bc0dfd5179d9f8a5779f9dffea1da0f13c53ee568115e1e"},
+ {file = "psutil-5.9.4.tar.gz", hash = "sha256:3d7f9739eb435d4b1338944abe23f49584bde5395f27487d2ee25ad9a8774a62"},
+]
+
+[package.extras]
+test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"]
+
+[[package]]
+name = "ptable"
+version = "0.9.2"
+description = "A simple Python library for easily displaying tabular data in a visually appealing ASCII table format"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "PTable-0.9.2.tar.gz", hash = "sha256:aa7fc151cb40f2dabcd2275ba6f7fd0ff8577a86be3365cd3fb297cbe09cc292"},
+]
+
+[[package]]
+name = "ptyprocess"
+version = "0.7.0"
+description = "Run a subprocess in a pseudo terminal"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
+ {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
+]
+
+[[package]]
+name = "py"
+version = "1.11.0"
+description = "library with cross-python path, ini-parsing, io, code, log facilities"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+ {file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
+ {file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
+]
+
+[[package]]
+name = "py-cpuinfo"
+version = "8.0.0"
+description = "Get CPU info with pure Python 2 & 3"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "py-cpuinfo-8.0.0.tar.gz", hash = "sha256:5f269be0e08e33fd959de96b34cd4aeeeacac014dd8305f70eb28d06de2345c5"},
+]
+
+[[package]]
+name = "pycodestyle"
+version = "2.8.0"
+description = "Python style guide checker"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+ {file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
+ {file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
+]
+
+[[package]]
+name = "pycparser"
+version = "2.21"
+description = "C parser in Python"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
+ {file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
+]
+
+[[package]]
+name = "pydantic"
+version = "1.9.2"
+description = "Data validation and settings management using python type hints"
+category = "dev"
+optional = false
+python-versions = ">=3.6.1"
+files = [
+ {file = "pydantic-1.9.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c9e04a6cdb7a363d7cb3ccf0efea51e0abb48e180c0d31dca8d247967d85c6e"},
+ {file = "pydantic-1.9.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fafe841be1103f340a24977f61dee76172e4ae5f647ab9e7fd1e1fca51524f08"},
+ {file = "pydantic-1.9.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afacf6d2a41ed91fc631bade88b1d319c51ab5418870802cedb590b709c5ae3c"},
+ {file = "pydantic-1.9.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ee0d69b2a5b341fc7927e92cae7ddcfd95e624dfc4870b32a85568bd65e6131"},
+ {file = "pydantic-1.9.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:ff68fc85355532ea77559ede81f35fff79a6a5543477e168ab3a381887caea76"},
+ {file = "pydantic-1.9.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c0f5e142ef8217019e3eef6ae1b6b55f09a7a15972958d44fbd228214cede567"},
+ {file = "pydantic-1.9.2-cp310-cp310-win_amd64.whl", hash = "sha256:615661bfc37e82ac677543704437ff737418e4ea04bef9cf11c6d27346606044"},
+ {file = "pydantic-1.9.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:328558c9f2eed77bd8fffad3cef39dbbe3edc7044517f4625a769d45d4cf7555"},
+ {file = "pydantic-1.9.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bd446bdb7755c3a94e56d7bdfd3ee92396070efa8ef3a34fab9579fe6aa1d84"},
+ {file = "pydantic-1.9.2-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e0b214e57623a535936005797567231a12d0da0c29711eb3514bc2b3cd008d0f"},
+ {file = "pydantic-1.9.2-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:d8ce3fb0841763a89322ea0432f1f59a2d3feae07a63ea2c958b2315e1ae8adb"},
+ {file = "pydantic-1.9.2-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:b34ba24f3e2d0b39b43f0ca62008f7ba962cff51efa56e64ee25c4af6eed987b"},
+ {file = "pydantic-1.9.2-cp36-cp36m-win_amd64.whl", hash = "sha256:84d76ecc908d917f4684b354a39fd885d69dd0491be175f3465fe4b59811c001"},
+ {file = "pydantic-1.9.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4de71c718c9756d679420c69f216776c2e977459f77e8f679a4a961dc7304a56"},
+ {file = "pydantic-1.9.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5803ad846cdd1ed0d97eb00292b870c29c1f03732a010e66908ff48a762f20e4"},
+ {file = "pydantic-1.9.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a8c5360a0297a713b4123608a7909e6869e1b56d0e96eb0d792c27585d40757f"},
+ {file = "pydantic-1.9.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:cdb4272678db803ddf94caa4f94f8672e9a46bae4a44f167095e4d06fec12979"},
+ {file = "pydantic-1.9.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:19b5686387ea0d1ea52ecc4cffb71abb21702c5e5b2ac626fd4dbaa0834aa49d"},
+ {file = "pydantic-1.9.2-cp37-cp37m-win_amd64.whl", hash = "sha256:32e0b4fb13ad4db4058a7c3c80e2569adbd810c25e6ca3bbd8b2a9cc2cc871d7"},
+ {file = "pydantic-1.9.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:91089b2e281713f3893cd01d8e576771cd5bfdfbff5d0ed95969f47ef6d676c3"},
+ {file = "pydantic-1.9.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e631c70c9280e3129f071635b81207cad85e6c08e253539467e4ead0e5b219aa"},
+ {file = "pydantic-1.9.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b3946f87e5cef3ba2e7bd3a4eb5a20385fe36521d6cc1ebf3c08a6697c6cfb3"},
+ {file = "pydantic-1.9.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5565a49effe38d51882cb7bac18bda013cdb34d80ac336428e8908f0b72499b0"},
+ {file = "pydantic-1.9.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:bd67cb2c2d9602ad159389c29e4ca964b86fa2f35c2faef54c3eb28b4efd36c8"},
+ {file = "pydantic-1.9.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:4aafd4e55e8ad5bd1b19572ea2df546ccace7945853832bb99422a79c70ce9b8"},
+ {file = "pydantic-1.9.2-cp38-cp38-win_amd64.whl", hash = "sha256:d70916235d478404a3fa8c997b003b5f33aeac4686ac1baa767234a0f8ac2326"},
+ {file = "pydantic-1.9.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0ca86b525264daa5f6b192f216a0d1e860b7383e3da1c65a1908f9c02f42801"},
+ {file = "pydantic-1.9.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1061c6ee6204f4f5a27133126854948e3b3d51fcc16ead2e5d04378c199b2f44"},
+ {file = "pydantic-1.9.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e78578f0c7481c850d1c969aca9a65405887003484d24f6110458fb02cca7747"},
+ {file = "pydantic-1.9.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5da164119602212a3fe7e3bc08911a89db4710ae51444b4224c2382fd09ad453"},
+ {file = "pydantic-1.9.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7ead3cd020d526f75b4188e0a8d71c0dbbe1b4b6b5dc0ea775a93aca16256aeb"},
+ {file = "pydantic-1.9.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7d0f183b305629765910eaad707800d2f47c6ac5bcfb8c6397abdc30b69eeb15"},
+ {file = "pydantic-1.9.2-cp39-cp39-win_amd64.whl", hash = "sha256:f1a68f4f65a9ee64b6ccccb5bf7e17db07caebd2730109cb8a95863cfa9c4e55"},
+ {file = "pydantic-1.9.2-py3-none-any.whl", hash = "sha256:78a4d6bdfd116a559aeec9a4cfe77dda62acc6233f8b56a716edad2651023e5e"},
+ {file = "pydantic-1.9.2.tar.gz", hash = "sha256:8cb0bc509bfb71305d7a59d00163d5f9fc4530f0881ea32c74ff4f74c85f3d3d"},
+]
+
+[package.dependencies]
+typing-extensions = ">=3.7.4.3"
+
+[package.extras]
+dotenv = ["python-dotenv (>=0.10.4)"]
+email = ["email-validator (>=1.0.3)"]
+
+[[package]]
+name = "pydocstyle"
+version = "6.3.0"
+description = "Python docstring style checker"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
+ {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = ">=2.0.0,<5.0.0", markers = "python_version < \"3.8\""}
+snowballstemmer = ">=2.2.0"
+
+[package.extras]
+toml = ["tomli (>=1.2.3)"]
+
+[[package]]
+name = "pyflakes"
+version = "2.4.0"
+description = "passive checker of Python programs"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
+ {file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
+]
+
+[[package]]
+name = "pygments"
+version = "2.14.0"
+description = "Pygments is a syntax highlighting package written in Python."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"},
+ {file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"},
+]
+
+[package.extras]
+plugins = ["importlib-metadata"]
+
+[[package]]
+name = "pygments-style-tomorrow"
+version = "1.0.0.1"
+description = "Pygments version of the tomorrow theme, Based on mozmorris/tomorrow-pygments."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "pygments-style-tomorrow-1.0.0.1.tar.gz", hash = "sha256:4132c31f11738f6bed52d43a3f187aa3b889a118f9a29b40d1f6afb5bb1037be"},
+ {file = "pygments_style_tomorrow-1.0.0.1-py3-none-any.whl", hash = "sha256:f5060ce35d598bf063a092db499782160b94d3d79f0fec96d527f20b5cd12743"},
+]
+
+[package.dependencies]
+pygments = ">=1.5"
+
+[[package]]
+name = "pylint"
+version = "2.11.1"
+description = "python code static checker"
+category = "dev"
+optional = false
+python-versions = "~=3.6"
+files = [
+ {file = "pylint-2.11.1-py3-none-any.whl", hash = "sha256:0f358e221c45cbd4dad2a1e4b883e75d28acdcccd29d40c76eb72b307269b126"},
+ {file = "pylint-2.11.1.tar.gz", hash = "sha256:2c9843fff1a88ca0ad98a256806c82c5a8f86086e7ccbdb93297d86c3f90c436"},
+]
+
+[package.dependencies]
+astroid = ">=2.8.0,<2.9"
+colorama = {version = "*", markers = "sys_platform == \"win32\""}
+isort = ">=4.2.5,<6"
+mccabe = ">=0.6,<0.7"
+platformdirs = ">=2.2.0"
+toml = ">=0.7.1"
+typing-extensions = {version = ">=3.10.0", markers = "python_version < \"3.10\""}
+
+[[package]]
+name = "pyparsing"
+version = "3.0.9"
+description = "pyparsing module - Classes and methods to define and execute parsing grammars"
+category = "main"
+optional = false
+python-versions = ">=3.6.8"
+files = [
+ {file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
+ {file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
+]
+
+[package.extras]
+diagrams = ["jinja2", "railroad-diagrams"]
+
+[[package]]
+name = "pyrsistent"
+version = "0.19.3"
+description = "Persistent/Functional/Immutable data structures"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pyrsistent-0.19.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:20460ac0ea439a3e79caa1dbd560344b64ed75e85d8703943e0b66c2a6150e4a"},
+ {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c18264cb84b5e68e7085a43723f9e4c1fd1d935ab240ce02c0324a8e01ccb64"},
+ {file = "pyrsistent-0.19.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b774f9288dda8d425adb6544e5903f1fb6c273ab3128a355c6b972b7df39dcf"},
+ {file = "pyrsistent-0.19.3-cp310-cp310-win32.whl", hash = "sha256:5a474fb80f5e0d6c9394d8db0fc19e90fa540b82ee52dba7d246a7791712f74a"},
+ {file = "pyrsistent-0.19.3-cp310-cp310-win_amd64.whl", hash = "sha256:49c32f216c17148695ca0e02a5c521e28a4ee6c5089f97e34fe24163113722da"},
+ {file = "pyrsistent-0.19.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f0774bf48631f3a20471dd7c5989657b639fd2d285b861237ea9e82c36a415a9"},
+ {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ab2204234c0ecd8b9368dbd6a53e83c3d4f3cab10ecaf6d0e772f456c442393"},
+ {file = "pyrsistent-0.19.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e42296a09e83028b3476f7073fcb69ffebac0e66dbbfd1bd847d61f74db30f19"},
+ {file = "pyrsistent-0.19.3-cp311-cp311-win32.whl", hash = "sha256:64220c429e42a7150f4bfd280f6f4bb2850f95956bde93c6fda1b70507af6ef3"},
+ {file = "pyrsistent-0.19.3-cp311-cp311-win_amd64.whl", hash = "sha256:016ad1afadf318eb7911baa24b049909f7f3bb2c5b1ed7b6a8f21db21ea3faa8"},
+ {file = "pyrsistent-0.19.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4db1bd596fefd66b296a3d5d943c94f4fac5bcd13e99bffe2ba6a759d959a28"},
+ {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aeda827381f5e5d65cced3024126529ddc4289d944f75e090572c77ceb19adbf"},
+ {file = "pyrsistent-0.19.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:42ac0b2f44607eb92ae88609eda931a4f0dfa03038c44c772e07f43e738bcac9"},
+ {file = "pyrsistent-0.19.3-cp37-cp37m-win32.whl", hash = "sha256:e8f2b814a3dc6225964fa03d8582c6e0b6650d68a232df41e3cc1b66a5d2f8d1"},
+ {file = "pyrsistent-0.19.3-cp37-cp37m-win_amd64.whl", hash = "sha256:c9bb60a40a0ab9aba40a59f68214eed5a29c6274c83b2cc206a359c4a89fa41b"},
+ {file = "pyrsistent-0.19.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a2471f3f8693101975b1ff85ffd19bb7ca7dd7c38f8a81701f67d6b4f97b87d8"},
+ {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cc5d149f31706762c1f8bda2e8c4f8fead6e80312e3692619a75301d3dbb819a"},
+ {file = "pyrsistent-0.19.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3311cb4237a341aa52ab8448c27e3a9931e2ee09561ad150ba94e4cfd3fc888c"},
+ {file = "pyrsistent-0.19.3-cp38-cp38-win32.whl", hash = "sha256:f0e7c4b2f77593871e918be000b96c8107da48444d57005b6a6bc61fb4331b2c"},
+ {file = "pyrsistent-0.19.3-cp38-cp38-win_amd64.whl", hash = "sha256:c147257a92374fde8498491f53ffa8f4822cd70c0d85037e09028e478cababb7"},
+ {file = "pyrsistent-0.19.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b735e538f74ec31378f5a1e3886a26d2ca6351106b4dfde376a26fc32a044edc"},
+ {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99abb85579e2165bd8522f0c0138864da97847875ecbd45f3e7e2af569bfc6f2"},
+ {file = "pyrsistent-0.19.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a8cb235fa6d3fd7aae6a4f1429bbb1fec1577d978098da1252f0489937786f3"},
+ {file = "pyrsistent-0.19.3-cp39-cp39-win32.whl", hash = "sha256:c74bed51f9b41c48366a286395c67f4e894374306b197e62810e0fdaf2364da2"},
+ {file = "pyrsistent-0.19.3-cp39-cp39-win_amd64.whl", hash = "sha256:878433581fc23e906d947a6814336eee031a00e6defba224234169ae3d3d6a98"},
+ {file = "pyrsistent-0.19.3-py3-none-any.whl", hash = "sha256:ccf0d6bd208f8111179f0c26fdf84ed7c3891982f2edaeae7422575f47e66b64"},
+ {file = "pyrsistent-0.19.3.tar.gz", hash = "sha256:1a2994773706bbb4995c31a97bc94f1418314923bd1048c6d964837040376440"},
+]
+
+[[package]]
+name = "pytest"
+version = "6.2.5"
+description = "pytest: simple powerful testing with Python"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pytest-6.2.5-py3-none-any.whl", hash = "sha256:7310f8d27bc79ced999e760ca304d69f6ba6c6649c0b60fb0e04a4a77cacc134"},
+ {file = "pytest-6.2.5.tar.gz", hash = "sha256:131b36680866a76e6781d13f101efb86cf674ebb9762eb70d3082b6f29889e89"},
+]
+
+[package.dependencies]
+atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
+attrs = ">=19.2.0"
+colorama = {version = "*", markers = "sys_platform == \"win32\""}
+importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
+iniconfig = "*"
+packaging = "*"
+pluggy = ">=0.12,<2.0"
+py = ">=1.8.2"
+toml = "*"
+
+[package.extras]
+testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"]
+
+[[package]]
+name = "pytest-codeblocks"
+version = "0.12.2"
+description = "Test code blocks in your READMEs"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pytest-codeblocks-0.12.2.tar.gz", hash = "sha256:6554cb970bdc5933dd70397b9f10c9495dc10b4765f83e6abbe2e96839053492"},
+ {file = "pytest_codeblocks-0.12.2-py3-none-any.whl", hash = "sha256:6be59c283c9a5226eb77ea4b066f913a1f7078828ace6cca26147b75d151b3fb"},
+]
+
+[package.dependencies]
+pytest = ">=6"
+
+[[package]]
+name = "pytest-cov"
+version = "3.0.0"
+description = "Pytest plugin for measuring coverage."
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
+ {file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
+]
+
+[package.dependencies]
+coverage = {version = ">=5.2.1", extras = ["toml"]}
+pytest = ">=4.6"
+
+[package.extras]
+testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
+
+[[package]]
+name = "pytest-forked"
+version = "1.6.0"
+description = "run tests in isolated forked subprocesses"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pytest-forked-1.6.0.tar.gz", hash = "sha256:4dafd46a9a600f65d822b8f605133ecf5b3e1941ebb3588e943b4e3eb71a5a3f"},
+ {file = "pytest_forked-1.6.0-py3-none-any.whl", hash = "sha256:810958f66a91afb1a1e2ae83089d8dc1cd2437ac96b12963042fbb9fb4d16af0"},
+]
+
+[package.dependencies]
+py = "*"
+pytest = ">=3.10"
+
+[[package]]
+name = "pytest-randomly"
+version = "3.12.0"
+description = "Pytest plugin to randomly order tests and control random.seed."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pytest-randomly-3.12.0.tar.gz", hash = "sha256:d60c2db71ac319aee0fc6c4110a7597d611a8b94a5590918bfa8583f00caccb2"},
+ {file = "pytest_randomly-3.12.0-py3-none-any.whl", hash = "sha256:f4f2e803daf5d1ba036cc22bf4fe9dbbf99389ec56b00e5cba732fb5c1d07fdd"},
+]
+
+[package.dependencies]
+importlib-metadata = {version = ">=3.6.0", markers = "python_version < \"3.10\""}
+pytest = "*"
+
+[[package]]
+name = "pytest-xdist"
+version = "2.5.0"
+description = "pytest xdist plugin for distributed testing and loop-on-failing modes"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pytest-xdist-2.5.0.tar.gz", hash = "sha256:4580deca3ff04ddb2ac53eba39d76cb5dd5edeac050cb6fbc768b0dd712b4edf"},
+ {file = "pytest_xdist-2.5.0-py3-none-any.whl", hash = "sha256:6fe5c74fec98906deb8f2d2b616b5c782022744978e7bd4695d39c8f42d0ce65"},
+]
+
+[package.dependencies]
+execnet = ">=1.1"
+pytest = ">=6.2.0"
+pytest-forked = "*"
+
+[package.extras]
+psutil = ["psutil (>=3.0)"]
+setproctitle = ["setproctitle"]
+testing = ["filelock"]
+
+[[package]]
+name = "python-dateutil"
+version = "2.8.2"
+description = "Extensions to the standard Python datetime module"
+category = "main"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
+files = [
+ {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
+ {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
+]
+
+[package.dependencies]
+six = ">=1.5"
+
+[[package]]
+name = "python-dotenv"
+version = "0.19.2"
+description = "Read key-value pairs from a .env file and set them as environment variables"
+category = "dev"
+optional = false
+python-versions = ">=3.5"
+files = [
+ {file = "python-dotenv-0.19.2.tar.gz", hash = "sha256:a5de49a31e953b45ff2d2fd434bbc2670e8db5273606c1e737cc6b93eff3655f"},
+ {file = "python_dotenv-0.19.2-py2.py3-none-any.whl", hash = "sha256:32b2bdc1873fd3a3c346da1c6db83d0053c3c62f28f1f38516070c4c8971b1d3"},
+]
+
+[package.extras]
+cli = ["click (>=5.0)"]
+
+[[package]]
+name = "python-gitlab"
+version = "2.10.1"
+description = "Interact with GitLab API"
+category = "dev"
+optional = false
+python-versions = ">=3.6.0"
+files = [
+ {file = "python-gitlab-2.10.1.tar.gz", hash = "sha256:7afa7d7c062fa62c173190452265a30feefb844428efc58ea5244f3b9fc0d40f"},
+ {file = "python_gitlab-2.10.1-py3-none-any.whl", hash = "sha256:581a219759515513ea9399e936ed7137437cfb681f52d2641626685c492c999d"},
+]
+
+[package.dependencies]
+requests = ">=2.25.0"
+requests-toolbelt = ">=0.9.1"
+
+[package.extras]
+autocompletion = ["argcomplete (>=1.10.0,<2)"]
+yaml = ["PyYaml (>=5.2)"]
+
+[[package]]
+name = "python-semantic-release"
+version = "7.23.0"
+description = "Automatic Semantic Versioning for Python projects"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "python-semantic-release-7.23.0.tar.gz", hash = "sha256:48c33bf671dafa1257e7d955543856eb98486a3f976f586053556ae180d725da"},
+ {file = "python_semantic_release-7.23.0-py3-none-any.whl", hash = "sha256:5bf7fcdb28e5e9888c9a15a1168afe53302116a6874d818580d4c58db60283ab"},
+]
+
+[package.dependencies]
+click = ">=7,<9"
+click-log = ">=0.3,<1"
+dotty-dict = ">=1.3.0,<2"
+gitpython = ">=3.0.8,<4"
+invoke = ">=1.4.1,<2"
+python-gitlab = ">=1.10,<3"
+requests = ">=2.25,<3"
+semver = ">=2.10,<3"
+tomlkit = "0.7.0"
+twine = ">=3,<4"
+wheel = "*"
+
+[package.extras]
+dev = ["black", "isort", "tox"]
+docs = ["Sphinx (==1.3.6)"]
+mypy = ["mypy", "types-requests"]
+test = ["coverage (>=5,<6)", "mock (==1.3.0)", "pytest (>=5,<6)", "pytest-mock (>=2,<3)", "pytest-xdist (>=1,<2)", "responses (==0.13.3)"]
+
+[[package]]
+name = "pywin32"
+version = "305"
+description = "Python for Window Extensions"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "pywin32-305-cp310-cp310-win32.whl", hash = "sha256:421f6cd86e84bbb696d54563c48014b12a23ef95a14e0bdba526be756d89f116"},
+ {file = "pywin32-305-cp310-cp310-win_amd64.whl", hash = "sha256:73e819c6bed89f44ff1d690498c0a811948f73777e5f97c494c152b850fad478"},
+ {file = "pywin32-305-cp310-cp310-win_arm64.whl", hash = "sha256:742eb905ce2187133a29365b428e6c3b9001d79accdc30aa8969afba1d8470f4"},
+ {file = "pywin32-305-cp311-cp311-win32.whl", hash = "sha256:19ca459cd2e66c0e2cc9a09d589f71d827f26d47fe4a9d09175f6aa0256b51c2"},
+ {file = "pywin32-305-cp311-cp311-win_amd64.whl", hash = "sha256:326f42ab4cfff56e77e3e595aeaf6c216712bbdd91e464d167c6434b28d65990"},
+ {file = "pywin32-305-cp311-cp311-win_arm64.whl", hash = "sha256:4ecd404b2c6eceaca52f8b2e3e91b2187850a1ad3f8b746d0796a98b4cea04db"},
+ {file = "pywin32-305-cp36-cp36m-win32.whl", hash = "sha256:48d8b1659284f3c17b68587af047d110d8c44837736b8932c034091683e05863"},
+ {file = "pywin32-305-cp36-cp36m-win_amd64.whl", hash = "sha256:13362cc5aa93c2beaf489c9c9017c793722aeb56d3e5166dadd5ef82da021fe1"},
+ {file = "pywin32-305-cp37-cp37m-win32.whl", hash = "sha256:a55db448124d1c1484df22fa8bbcbc45c64da5e6eae74ab095b9ea62e6d00496"},
+ {file = "pywin32-305-cp37-cp37m-win_amd64.whl", hash = "sha256:109f98980bfb27e78f4df8a51a8198e10b0f347257d1e265bb1a32993d0c973d"},
+ {file = "pywin32-305-cp38-cp38-win32.whl", hash = "sha256:9dd98384da775afa009bc04863426cb30596fd78c6f8e4e2e5bbf4edf8029504"},
+ {file = "pywin32-305-cp38-cp38-win_amd64.whl", hash = "sha256:56d7a9c6e1a6835f521788f53b5af7912090674bb84ef5611663ee1595860fc7"},
+ {file = "pywin32-305-cp39-cp39-win32.whl", hash = "sha256:9d968c677ac4d5cbdaa62fd3014ab241718e619d8e36ef8e11fb930515a1e918"},
+ {file = "pywin32-305-cp39-cp39-win_amd64.whl", hash = "sha256:50768c6b7c3f0b38b7fb14dd4104da93ebced5f1a50dc0e834594bff6fbe1271"},
+]
+
+[[package]]
+name = "pywin32-ctypes"
+version = "0.2.0"
+description = ""
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
+ {file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
+]
+
+[[package]]
+name = "pywinpty"
+version = "2.0.10"
+description = "Pseudo terminal support for Windows from Python."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "pywinpty-2.0.10-cp310-none-win_amd64.whl", hash = "sha256:4c7d06ad10f6e92bc850a467f26d98f4f30e73d2fe5926536308c6ae0566bc16"},
+ {file = "pywinpty-2.0.10-cp311-none-win_amd64.whl", hash = "sha256:7ffbd66310b83e42028fc9df7746118978d94fba8c1ebf15a7c1275fdd80b28a"},
+ {file = "pywinpty-2.0.10-cp37-none-win_amd64.whl", hash = "sha256:38cb924f2778b5751ef91a75febd114776b3af0ae411bc667be45dd84fc881d3"},
+ {file = "pywinpty-2.0.10-cp38-none-win_amd64.whl", hash = "sha256:902d79444b29ad1833b8d5c3c9aabdfd428f4f068504430df18074007c8c0de8"},
+ {file = "pywinpty-2.0.10-cp39-none-win_amd64.whl", hash = "sha256:3c46aef80dd50979aff93de199e4a00a8ee033ba7a03cadf0a91fed45f0c39d7"},
+ {file = "pywinpty-2.0.10.tar.gz", hash = "sha256:cdbb5694cf8c7242c2ecfaca35c545d31fa5d5814c3d67a4e628f803f680ebea"},
+]
+
+[[package]]
+name = "pyyaml"
+version = "6.0"
+description = "YAML parser and emitter for Python"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
+ {file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
+ {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
+ {file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
+ {file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
+ {file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
+ {file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
+ {file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
+ {file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
+ {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
+ {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
+ {file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
+ {file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
+ {file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
+ {file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
+ {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
+ {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
+ {file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
+ {file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
+ {file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
+ {file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
+ {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
+ {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
+ {file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
+ {file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
+ {file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
+ {file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
+ {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
+ {file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
+ {file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
+ {file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
+ {file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
+ {file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
+ {file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
+ {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
+ {file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
+ {file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
+ {file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
+ {file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
+ {file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
+]
+
+[[package]]
+name = "pyzmq"
+version = "25.0.0"
+description = "Python bindings for 0MQ"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "pyzmq-25.0.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:2d05d904f03ddf1e0d83d97341354dfe52244a619b5a1440a5f47a5b3451e84e"},
+ {file = "pyzmq-25.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a154ef810d44f9d28868be04641f837374a64e7449df98d9208e76c260c7ef1"},
+ {file = "pyzmq-25.0.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:487305c2a011fdcf3db1f24e8814bb76d23bc4d2f46e145bc80316a59a9aa07d"},
+ {file = "pyzmq-25.0.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e7b87638ee30ab13230e37ce5331b3e730b1e0dda30120b9eeec3540ed292c8"},
+ {file = "pyzmq-25.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75243e422e85a62f0ab7953dc315452a56b2c6a7e7d1a3c3109ac3cc57ed6b47"},
+ {file = "pyzmq-25.0.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:31e523d067ce44a04e876bed3ff9ea1ff8d1b6636d16e5fcace9d22f8c564369"},
+ {file = "pyzmq-25.0.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8539216173135e9e89f6b1cc392e74e6b935b91e8c76106cf50e7a02ab02efe5"},
+ {file = "pyzmq-25.0.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2754fa68da08a854f4816e05160137fa938a2347276471103d31e04bcee5365c"},
+ {file = "pyzmq-25.0.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4a1bc30f0c18444d51e9b0d0dd39e3a4e7c53ee74190bebef238cd58de577ea9"},
+ {file = "pyzmq-25.0.0-cp310-cp310-win32.whl", hash = "sha256:01d53958c787cfea34091fcb8ef36003dbb7913b8e9f8f62a0715234ebc98b70"},
+ {file = "pyzmq-25.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:58fc3ad5e1cfd2e6d24741fbb1e216b388115d31b0ca6670f894187f280b6ba6"},
+ {file = "pyzmq-25.0.0-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:e4bba04ea779a3d7ef25a821bb63fd0939142c88e7813e5bd9c6265a20c523a2"},
+ {file = "pyzmq-25.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:af1fbfb7ad6ac0009ccee33c90a1d303431c7fb594335eb97760988727a37577"},
+ {file = "pyzmq-25.0.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85456f0d8f3268eecd63dede3b99d5bd8d3b306310c37d4c15141111d22baeaf"},
+ {file = "pyzmq-25.0.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0645b5a2d2a06fd8eb738018490c514907f7488bf9359c6ee9d92f62e844b76f"},
+ {file = "pyzmq-25.0.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f72ea279b2941a5203e935a4588b9ba8a48aeb9a926d9dfa1986278bd362cb8"},
+ {file = "pyzmq-25.0.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:4e295f7928a31ae0f657e848c5045ba6d693fe8921205f408ca3804b1b236968"},
+ {file = "pyzmq-25.0.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ac97e7d647d5519bcef48dd8d3d331f72975afa5c4496c95f6e854686f45e2d9"},
+ {file = "pyzmq-25.0.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:656281d496aaf9ca4fd4cea84e6d893e3361057c4707bd38618f7e811759103c"},
+ {file = "pyzmq-25.0.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1f6116991568aac48b94d6d8aaed6157d407942ea385335a6ed313692777fb9d"},
+ {file = "pyzmq-25.0.0-cp311-cp311-win32.whl", hash = "sha256:0282bba9aee6e0346aa27d6c69b5f7df72b5a964c91958fc9e0c62dcae5fdcdc"},
+ {file = "pyzmq-25.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:526f884a27e8bba62fe1f4e07c62be2cfe492b6d432a8fdc4210397f8cf15331"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:ccb3e1a863222afdbda42b7ca8ac8569959593d7abd44f5a709177d6fa27d266"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4046d03100aca266e70d54a35694cb35d6654cfbef633e848b3c4a8d64b9d187"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3100dddcada66ec5940ed6391ebf9d003cc3ede3d320748b2737553019f58230"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:7877264aa851c19404b1bb9dbe6eed21ea0c13698be1eda3784aab3036d1c861"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:5049e75cc99db65754a3da5f079230fb8889230cf09462ec972d884d1704a3ed"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:81f99fb1224d36eb91557afec8cdc2264e856f3464500b55749020ce4c848ef2"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:a1cd4a95f176cdc0ee0a82d49d5830f13ae6015d89decbf834c273bc33eeb3d3"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-win32.whl", hash = "sha256:926236ca003aec70574754f39703528947211a406f5c6c8b3e50eca04a9e87fc"},
+ {file = "pyzmq-25.0.0-cp36-cp36m-win_amd64.whl", hash = "sha256:94f0a7289d0f5c80807c37ebb404205e7deb737e8763eb176f4770839ee2a287"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:f3f96d452e9580cb961ece2e5a788e64abaecb1232a80e61deffb28e105ff84a"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:930e6ad4f2eaac31a3d0c2130619d25db754b267487ebc186c6ad18af2a74018"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1081d7030a1229c8ff90120346fb7599b54f552e98fcea5170544e7c6725aab"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:531866c491aee5a1e967c286cfa470dffac1e2a203b1afda52d62b58782651e9"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:fc7c1421c5b1c916acf3128bf3cc7ea7f5018b58c69a6866d70c14190e600ce9"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9a2d5e419bd39a1edb6cdd326d831f0120ddb9b1ff397e7d73541bf393294973"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:183e18742be3621acf8908903f689ec520aee3f08449bfd29f583010ca33022b"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-win32.whl", hash = "sha256:02f5cb60a7da1edd5591a15efa654ffe2303297a41e1b40c3c8942f8f11fc17c"},
+ {file = "pyzmq-25.0.0-cp37-cp37m-win_amd64.whl", hash = "sha256:cac602e02341eaaf4edfd3e29bd3fdef672e61d4e6dfe5c1d065172aee00acee"},
+ {file = "pyzmq-25.0.0-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:e14df47c1265356715d3d66e90282a645ebc077b70b3806cf47efcb7d1d630cb"},
+ {file = "pyzmq-25.0.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:293a7c2128690f496057f1f1eb6074f8746058d13588389981089ec45d8fdc77"},
+ {file = "pyzmq-25.0.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:731b208bc9412deeb553c9519dca47136b5a01ca66667cafd8733211941b17e4"},
+ {file = "pyzmq-25.0.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b055a1cddf8035966ad13aa51edae5dc8f1bba0b5d5e06f7a843d8b83dc9b66b"},
+ {file = "pyzmq-25.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17e1cb97d573ea84d7cd97188b42ca6f611ab3ee600f6a75041294ede58e3d20"},
+ {file = "pyzmq-25.0.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:60ecbfe7669d3808ffa8a7dd1487d6eb8a4015b07235e3b723d4b2a2d4de7203"},
+ {file = "pyzmq-25.0.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:4c25c95416133942280faaf068d0fddfd642b927fb28aaf4ab201a738e597c1e"},
+ {file = "pyzmq-25.0.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:be05504af0619d1cffa500af1e0ede69fb683f301003851f5993b5247cc2c576"},
+ {file = "pyzmq-25.0.0-cp38-cp38-win32.whl", hash = "sha256:6bf3842af37af43fa953e96074ebbb5315f6a297198f805d019d788a1021dbc8"},
+ {file = "pyzmq-25.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:b90bb8dfbbd138558f1f284fecfe328f7653616ff9a972433a00711d9475d1a9"},
+ {file = "pyzmq-25.0.0-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:62b9e80890c0d2408eb42d5d7e1fc62a5ce71be3288684788f74cf3e59ffd6e2"},
+ {file = "pyzmq-25.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:484c2c4ee02c1edc07039f42130bd16e804b1fe81c4f428e0042e03967f40c20"},
+ {file = "pyzmq-25.0.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:9ca6db34b26c4d3e9b0728841ec9aa39484eee272caa97972ec8c8e231b20c7e"},
+ {file = "pyzmq-25.0.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:610d2d112acd4e5501fac31010064a6c6efd716ceb968e443cae0059eb7b86de"},
+ {file = "pyzmq-25.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3594c0ff604e685d7e907860b61d0e10e46c74a9ffca168f6e9e50ea934ee440"},
+ {file = "pyzmq-25.0.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c21a5f4e54a807df5afdef52b6d24ec1580153a6bcf0607f70a6e1d9fa74c5c3"},
+ {file = "pyzmq-25.0.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4725412e27612f0d7d7c2f794d89807ad0227c2fc01dd6146b39ada49c748ef9"},
+ {file = "pyzmq-25.0.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4d3d604fe0a67afd1aff906e54da557a5203368a99dcc50a70eef374f1d2abef"},
+ {file = "pyzmq-25.0.0-cp39-cp39-win32.whl", hash = "sha256:3670e8c5644768f214a3b598fe46378a4a6f096d5fb82a67dfd3440028460565"},
+ {file = "pyzmq-25.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:e99629a976809fe102ef73e856cf4b2660acd82a412a51e80ba2215e523dfd0a"},
+ {file = "pyzmq-25.0.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:66509c48f7446b640eeae24b60c9c1461799a27b1b0754e438582e36b5af3315"},
+ {file = "pyzmq-25.0.0-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a9c464cc508177c09a5a6122b67f978f20e2954a21362bf095a0da4647e3e908"},
+ {file = "pyzmq-25.0.0-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:28bcb2e66224a7ac2843eb632e4109d6b161479e7a2baf24e37210461485b4f1"},
+ {file = "pyzmq-25.0.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0e7ef9ac807db50b4eb6f534c5dcc22f998f5dae920cc28873d2c1d080a4fc9"},
+ {file = "pyzmq-25.0.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:5050f5c50b58a6e38ccaf9263a356f74ef1040f5ca4030225d1cb1a858c5b7b6"},
+ {file = "pyzmq-25.0.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2a73af6504e0d2805e926abf136ebf536735a13c22f709be7113c2ec65b4bec3"},
+ {file = "pyzmq-25.0.0-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:0e8d00228db627ddd1b418c7afd81820b38575f237128c9650365f2dd6ac3443"},
+ {file = "pyzmq-25.0.0-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5605621f2181f20b71f13f698944deb26a0a71af4aaf435b34dd90146092d530"},
+ {file = "pyzmq-25.0.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6136bfb0e5a9cf8c60c6ac763eb21f82940a77e6758ea53516c8c7074f4ff948"},
+ {file = "pyzmq-25.0.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:0a90b2480a26aef7c13cff18703ba8d68e181facb40f78873df79e6d42c1facc"},
+ {file = "pyzmq-25.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:00c94fd4c9dd3c95aace0c629a7fa713627a5c80c1819326b642adf6c4b8e2a2"},
+ {file = "pyzmq-25.0.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20638121b0bdc80777ce0ec8c1f14f1ffec0697a1f88f0b564fa4a23078791c4"},
+ {file = "pyzmq-25.0.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b6f75b4b8574f3a8a0d6b4b52606fc75b82cb4391471be48ab0b8677c82f9ed4"},
+ {file = "pyzmq-25.0.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cbb885f347eba7ab7681c450dee5b14aed9f153eec224ec0c3f299273d9241f"},
+ {file = "pyzmq-25.0.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c48f257da280b3be6c94e05bd575eddb1373419dbb1a72c3ce64e88f29d1cd6d"},
+ {file = "pyzmq-25.0.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:866eabf7c1315ef2e93e34230db7cbf672e0d7c626b37c11f7e870c8612c3dcc"},
+ {file = "pyzmq-25.0.0.tar.gz", hash = "sha256:f330a1a2c7f89fd4b0aa4dcb7bf50243bf1c8da9a2f1efc31daf57a2046b31f2"},
+]
+
+[package.dependencies]
+cffi = {version = "*", markers = "implementation_name == \"pypy\""}
+
+[[package]]
+name = "qtconsole"
+version = "5.4.0"
+description = "Jupyter Qt console"
+category = "dev"
+optional = false
+python-versions = ">= 3.7"
+files = [
+ {file = "qtconsole-5.4.0-py3-none-any.whl", hash = "sha256:be13560c19bdb3b54ed9741a915aa701a68d424519e8341ac479a91209e694b2"},
+ {file = "qtconsole-5.4.0.tar.gz", hash = "sha256:57748ea2fd26320a0b77adba20131cfbb13818c7c96d83fafcb110ff55f58b35"},
+]
+
+[package.dependencies]
+ipykernel = ">=4.1"
+ipython-genutils = "*"
+jupyter-client = ">=4.1"
+jupyter-core = "*"
+pygments = "*"
+pyzmq = ">=17.1"
+qtpy = ">=2.0.1"
+traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
+
+[package.extras]
+doc = ["Sphinx (>=1.3)"]
+test = ["flaky", "pytest", "pytest-qt"]
+
+[[package]]
+name = "qtpy"
+version = "2.3.0"
+description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"},
+ {file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"},
+]
+
+[package.dependencies]
+packaging = "*"
+
+[package.extras]
+test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"]
+
+[[package]]
+name = "readme-renderer"
+version = "37.3"
+description = "readme_renderer is a library for rendering \"readme\" descriptions for Warehouse"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "readme_renderer-37.3-py3-none-any.whl", hash = "sha256:f67a16caedfa71eef48a31b39708637a6f4664c4394801a7b0d6432d13907343"},
+ {file = "readme_renderer-37.3.tar.gz", hash = "sha256:cd653186dfc73055656f090f227f5cb22a046d7f71a841dfa305f55c9a513273"},
+]
+
+[package.dependencies]
+bleach = ">=2.1.0"
+docutils = ">=0.13.1"
+Pygments = ">=2.5.1"
+
+[package.extras]
+md = ["cmarkgfm (>=0.8.0)"]
+
+[[package]]
+name = "requests"
+version = "2.28.2"
+description = "Python HTTP for Humans."
+category = "dev"
+optional = false
+python-versions = ">=3.7, <4"
+files = [
+ {file = "requests-2.28.2-py3-none-any.whl", hash = "sha256:64299f4909223da747622c030b781c0d7811e359c37124b4bd368fb8c6518baa"},
+ {file = "requests-2.28.2.tar.gz", hash = "sha256:98b1b2782e3c6c4904938b84c0eb932721069dfdb9134313beff7c83c2df24bf"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset-normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<1.27"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
+[[package]]
+name = "requests-toolbelt"
+version = "0.10.1"
+description = "A utility belt for advanced users of python-requests"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "requests-toolbelt-0.10.1.tar.gz", hash = "sha256:62e09f7ff5ccbda92772a29f394a49c3ad6cb181d568b1337626b2abb628a63d"},
+ {file = "requests_toolbelt-0.10.1-py2.py3-none-any.whl", hash = "sha256:18565aa58116d9951ac39baa288d3adb5b3ff975c4f25eee78555d89e8f247f7"},
+]
+
+[package.dependencies]
+requests = ">=2.0.1,<3.0.0"
+
+[[package]]
+name = "resolvelib"
+version = "0.9.0"
+description = "Resolve abstract dependencies into concrete ones"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "resolvelib-0.9.0-py2.py3-none-any.whl", hash = "sha256:597adcbdf81d62d0cde55d90faa8e79187ec0f18e5012df30bd7a751b26343ae"},
+ {file = "resolvelib-0.9.0.tar.gz", hash = "sha256:40ab05117c3281b1b160105e10075094c5ab118315003c922b77673a365290e1"},
+]
+
+[package.extras]
+examples = ["html5lib", "packaging", "pygraphviz", "requests"]
+lint = ["black", "flake8", "isort", "mypy", "types-requests"]
+release = ["build", "towncrier", "twine"]
+test = ["commentjson", "packaging", "pytest"]
+
+[[package]]
+name = "rfc3986"
+version = "2.0.0"
+description = "Validating URI References per RFC 3986"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "rfc3986-2.0.0-py2.py3-none-any.whl", hash = "sha256:50b1502b60e289cb37883f3dfd34532b8873c7de9f49bb546641ce9cbd256ebd"},
+ {file = "rfc3986-2.0.0.tar.gz", hash = "sha256:97aacf9dbd4bfd829baad6e6309fa6573aaf1be3f6fa735c8ab05e46cecb261c"},
+]
+
+[package.extras]
+idna2008 = ["idna"]
+
+[[package]]
+name = "ruff"
+version = "0.0.191"
+description = "An extremely fast Python linter, written in Rust."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "ruff-0.0.191-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:77aa90ab83f6ef663ad002708eec13b7931193dfa418c09564ab34df4766a11d"},
+ {file = "ruff-0.0.191-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:75e8d628f4d1b0db216e8cb26a10f36a6c3db572e839aaad8ac0a51e0bd1c0ef"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ce09e5c6253854ce907a42640c316f97ff49b8a3e35ce3e44524e1fc67bf3f3"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0a230dad3805d70d4c2c9716e1cfe0939695979c8ef7b7b4791e3f5c26c5d965"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2727d02f9cacca0d945d1a4fa443ff9469a131c7791df1f8824b9c89721bd138"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:20b52da4ff1008f4401e4681c6b8552133aa0553cde17dfcd4d42d992242e578"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1084aa3a448c5f98c567547de79eef2dba6f022232fd9c89876881d74de3d3cd"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0d76b88b0dbd6dfa6ac3ceb08bd519e140ad6e4f7cf9cf2d88310a084af35e92"},
+ {file = "ruff-0.0.191-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:614106b64fefd70750a2eb3e50bfce7c9b56585d7be96f925fb3c13108552984"},
+ {file = "ruff-0.0.191-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:304b83f1ab6f91245a800b3c60e38083b89ee240cbdb3d1d8a98671265292c11"},
+ {file = "ruff-0.0.191-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4bf38f1419b5d8b3bc299a2cba183a43ade4858af2160033fb94aa9e5a85c700"},
+ {file = "ruff-0.0.191-py3-none-musllinux_1_2_i686.whl", hash = "sha256:97c22a00daad210f03458e8993c62bee38ec98d2f0b8d50a671b31a96256cd5f"},
+ {file = "ruff-0.0.191-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:c2a5b7e2b7aa33e5ece2291dc655ad2023626114179fc166d382e3902f319dd5"},
+ {file = "ruff-0.0.191-py3-none-win32.whl", hash = "sha256:195da807f65e1b153379b7258086cbfb3b10143ecf93683275dfef561593c485"},
+ {file = "ruff-0.0.191-py3-none-win_amd64.whl", hash = "sha256:a25a7b9d56732df7f4887f3bd4a66dd54924be8c4d359e6a73bd5b1a8082f1bf"},
+ {file = "ruff-0.0.191.tar.gz", hash = "sha256:d698c4d5e3b2963cbbb7c2728f404091d5c47cdf8d94db3eb2f335e2a93a6b1b"},
+]
+
+[[package]]
+name = "scipy"
+version = "1.7.3"
+description = "SciPy: Scientific Library for Python"
+category = "main"
+optional = false
+python-versions = ">=3.7,<3.11"
+files = [
+ {file = "scipy-1.7.3-1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:c9e04d7e9b03a8a6ac2045f7c5ef741be86727d8f49c45db45f244bdd2bcff17"},
+ {file = "scipy-1.7.3-1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:b0e0aeb061a1d7dcd2ed59ea57ee56c9b23dd60100825f98238c06ee5cc4467e"},
+ {file = "scipy-1.7.3-1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:b78a35c5c74d336f42f44106174b9851c783184a85a3fe3e68857259b37b9ffb"},
+ {file = "scipy-1.7.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:173308efba2270dcd61cd45a30dfded6ec0085b4b6eb33b5eb11ab443005e088"},
+ {file = "scipy-1.7.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:21b66200cf44b1c3e86495e3a436fc7a26608f92b8d43d344457c54f1c024cbc"},
+ {file = "scipy-1.7.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ceebc3c4f6a109777c0053dfa0282fddb8893eddfb0d598574acfb734a926168"},
+ {file = "scipy-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7eaea089345a35130bc9a39b89ec1ff69c208efa97b3f8b25ea5d4c41d88094"},
+ {file = "scipy-1.7.3-cp310-cp310-win_amd64.whl", hash = "sha256:304dfaa7146cffdb75fbf6bb7c190fd7688795389ad060b970269c8576d038e9"},
+ {file = "scipy-1.7.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:033ce76ed4e9f62923e1f8124f7e2b0800db533828c853b402c7eec6e9465d80"},
+ {file = "scipy-1.7.3-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4d242d13206ca4302d83d8a6388c9dfce49fc48fdd3c20efad89ba12f785bf9e"},
+ {file = "scipy-1.7.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8499d9dd1459dc0d0fe68db0832c3d5fc1361ae8e13d05e6849b358dc3f2c279"},
+ {file = "scipy-1.7.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca36e7d9430f7481fc7d11e015ae16fbd5575615a8e9060538104778be84addf"},
+ {file = "scipy-1.7.3-cp37-cp37m-win32.whl", hash = "sha256:e2c036492e673aad1b7b0d0ccdc0cb30a968353d2c4bf92ac8e73509e1bf212c"},
+ {file = "scipy-1.7.3-cp37-cp37m-win_amd64.whl", hash = "sha256:866ada14a95b083dd727a845a764cf95dd13ba3dc69a16b99038001b05439709"},
+ {file = "scipy-1.7.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:65bd52bf55f9a1071398557394203d881384d27b9c2cad7df9a027170aeaef93"},
+ {file = "scipy-1.7.3-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:f99d206db1f1ae735a8192ab93bd6028f3a42f6fa08467d37a14eb96c9dd34a3"},
+ {file = "scipy-1.7.3-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5f2cfc359379c56b3a41b17ebd024109b2049f878badc1e454f31418c3a18436"},
+ {file = "scipy-1.7.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eb7ae2c4dbdb3c9247e07acc532f91077ae6dbc40ad5bd5dca0bb5a176ee9bda"},
+ {file = "scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95c2d250074cfa76715d58830579c64dff7354484b284c2b8b87e5a38321672c"},
+ {file = "scipy-1.7.3-cp38-cp38-win32.whl", hash = "sha256:87069cf875f0262a6e3187ab0f419f5b4280d3dcf4811ef9613c605f6e4dca95"},
+ {file = "scipy-1.7.3-cp38-cp38-win_amd64.whl", hash = "sha256:7edd9a311299a61e9919ea4192dd477395b50c014cdc1a1ac572d7c27e2207fa"},
+ {file = "scipy-1.7.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:eef93a446114ac0193a7b714ce67659db80caf940f3232bad63f4c7a81bc18df"},
+ {file = "scipy-1.7.3-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:eb326658f9b73c07081300daba90a8746543b5ea177184daed26528273157294"},
+ {file = "scipy-1.7.3-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:93378f3d14fff07572392ce6a6a2ceb3a1f237733bd6dcb9eb6a2b29b0d19085"},
+ {file = "scipy-1.7.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:edad1cf5b2ce1912c4d8ddad20e11d333165552aba262c882e28c78bbc09dbf6"},
+ {file = "scipy-1.7.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d1cc2c19afe3b5a546ede7e6a44ce1ff52e443d12b231823268019f608b9b12"},
+ {file = "scipy-1.7.3-cp39-cp39-win32.whl", hash = "sha256:2c56b820d304dffcadbbb6cbfbc2e2c79ee46ea291db17e288e73cd3c64fefa9"},
+ {file = "scipy-1.7.3-cp39-cp39-win_amd64.whl", hash = "sha256:3f78181a153fa21c018d346f595edd648344751d7f03ab94b398be2ad083ed3e"},
+ {file = "scipy-1.7.3.tar.gz", hash = "sha256:ab5875facfdef77e0a47d5fd39ea178b58e60e454a4c85aa1e52fcb80db7babf"},
+]
+
+[package.dependencies]
+numpy = ">=1.16.5,<1.23.0"
+
+[[package]]
+name = "scipy"
+version = "1.10.1"
+description = "Fundamental algorithms for scientific computing in Python"
+category = "main"
+optional = false
+python-versions = "<3.12,>=3.8"
+files = [
+ {file = "scipy-1.10.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e7354fd7527a4b0377ce55f286805b34e8c54b91be865bac273f527e1b839019"},
+ {file = "scipy-1.10.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:4b3f429188c66603a1a5c549fb414e4d3bdc2a24792e061ffbd607d3d75fd84e"},
+ {file = "scipy-1.10.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1553b5dcddd64ba9a0d95355e63fe6c3fc303a8fd77c7bc91e77d61363f7433f"},
+ {file = "scipy-1.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c0ff64b06b10e35215abce517252b375e580a6125fd5fdf6421b98efbefb2d2"},
+ {file = "scipy-1.10.1-cp310-cp310-win_amd64.whl", hash = "sha256:fae8a7b898c42dffe3f7361c40d5952b6bf32d10c4569098d276b4c547905ee1"},
+ {file = "scipy-1.10.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f1564ea217e82c1bbe75ddf7285ba0709ecd503f048cb1236ae9995f64217bd"},
+ {file = "scipy-1.10.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:d925fa1c81b772882aa55bcc10bf88324dadb66ff85d548c71515f6689c6dac5"},
+ {file = "scipy-1.10.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaea0a6be54462ec027de54fca511540980d1e9eea68b2d5c1dbfe084797be35"},
+ {file = "scipy-1.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15a35c4242ec5f292c3dd364a7c71a61be87a3d4ddcc693372813c0b73c9af1d"},
+ {file = "scipy-1.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:43b8e0bcb877faf0abfb613d51026cd5cc78918e9530e375727bf0625c82788f"},
+ {file = "scipy-1.10.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5678f88c68ea866ed9ebe3a989091088553ba12c6090244fdae3e467b1139c35"},
+ {file = "scipy-1.10.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:39becb03541f9e58243f4197584286e339029e8908c46f7221abeea4b749fa88"},
+ {file = "scipy-1.10.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bce5869c8d68cf383ce240e44c1d9ae7c06078a9396df68ce88a1230f93a30c1"},
+ {file = "scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:07c3457ce0b3ad5124f98a86533106b643dd811dd61b548e78cf4c8786652f6f"},
+ {file = "scipy-1.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:049a8bbf0ad95277ffba9b3b7d23e5369cc39e66406d60422c8cfef40ccc8415"},
+ {file = "scipy-1.10.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:cd9f1027ff30d90618914a64ca9b1a77a431159df0e2a195d8a9e8a04c78abf9"},
+ {file = "scipy-1.10.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:79c8e5a6c6ffaf3a2262ef1be1e108a035cf4f05c14df56057b64acc5bebffb6"},
+ {file = "scipy-1.10.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51af417a000d2dbe1ec6c372dfe688e041a7084da4fdd350aeb139bd3fb55353"},
+ {file = "scipy-1.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b4735d6c28aad3cdcf52117e0e91d6b39acd4272f3f5cd9907c24ee931ad601"},
+ {file = "scipy-1.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:7ff7f37b1bf4417baca958d254e8e2875d0cc23aaadbe65b3d5b3077b0eb23ea"},
+ {file = "scipy-1.10.1.tar.gz", hash = "sha256:2cf9dfb80a7b4589ba4c40ce7588986d6d5cebc5457cad2c2880f6bc2d42f3a5"},
+]
+
+[package.dependencies]
+numpy = ">=1.19.5,<1.27.0"
+
+[package.extras]
+dev = ["click", "doit (>=0.36.0)", "flake8", "mypy", "pycodestyle", "pydevtool", "rich-click", "typing_extensions"]
+doc = ["matplotlib (>2)", "numpydoc", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-design (>=0.2.0)"]
+test = ["asv", "gmpy2", "mpmath", "pooch", "pytest", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
+
+[[package]]
+name = "secretstorage"
+version = "3.3.3"
+description = "Python bindings to FreeDesktop.org Secret Service API"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "SecretStorage-3.3.3-py3-none-any.whl", hash = "sha256:f356e6628222568e3af06f2eba8df495efa13b3b63081dafd4f7d9a7b7bc9f99"},
+ {file = "SecretStorage-3.3.3.tar.gz", hash = "sha256:2403533ef369eca6d2ba81718576c5e0f564d5cca1b58f73a8b23e7d4eeebd77"},
+]
+
+[package.dependencies]
+cryptography = ">=2.0"
+jeepney = ">=0.6"
+
+[[package]]
+name = "semver"
+version = "2.13.0"
+description = "Python helper for Semantic Versioning (http://semver.org/)"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+files = [
+ {file = "semver-2.13.0-py2.py3-none-any.whl", hash = "sha256:ced8b23dceb22134307c1b8abfa523da14198793d9787ac838e70e29e77458d4"},
+ {file = "semver-2.13.0.tar.gz", hash = "sha256:fa0fe2722ee1c3f57eac478820c3a5ae2f624af8264cbdf9000c980ff7f75e3f"},
+]
+
+[[package]]
+name = "send2trash"
+version = "1.8.0"
+description = "Send file to trash natively under Mac OS X, Windows and Linux."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
+ {file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
+]
+
+[package.extras]
+nativelib = ["pyobjc-framework-Cocoa", "pywin32"]
+objc = ["pyobjc-framework-Cocoa"]
+win32 = ["pywin32"]
+
+[[package]]
+name = "setuptools"
+version = "67.4.0"
+description = "Easily download, build, install, upgrade, and uninstall Python packages"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "setuptools-67.4.0-py3-none-any.whl", hash = "sha256:f106dee1b506dee5102cc3f3e9e68137bbad6d47b616be7991714b0c62204251"},
+ {file = "setuptools-67.4.0.tar.gz", hash = "sha256:e5fd0a713141a4a105412233c63dc4e17ba0090c8e8334594ac790ec97792330"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
+testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
+testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
+
+[[package]]
+name = "six"
+version = "1.16.0"
+description = "Python 2 and 3 compatibility utilities"
+category = "main"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+ {file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
+ {file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
+]
+
+[[package]]
+name = "smmap"
+version = "5.0.0"
+description = "A pure Python implementation of a sliding window memory map manager"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "smmap-5.0.0-py3-none-any.whl", hash = "sha256:2aba19d6a040e78d8b09de5c57e96207b09ed71d8e55ce0959eeee6c8e190d94"},
+ {file = "smmap-5.0.0.tar.gz", hash = "sha256:c840e62059cd3be204b0c9c9f74be2c09d5648eddd4580d9314c3ecde0b30936"},
+]
+
+[[package]]
+name = "sniffio"
+version = "1.3.0"
+description = "Sniff out which async library your code is running under"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "sniffio-1.3.0-py3-none-any.whl", hash = "sha256:eecefdce1e5bbfb7ad2eeaabf7c1eeb404d7757c379bd1f7e5cce9d8bf425384"},
+ {file = "sniffio-1.3.0.tar.gz", hash = "sha256:e60305c5e5d314f5389259b7f22aaa33d8f7dee49763119234af3755c55b9101"},
+]
+
+[[package]]
+name = "snowballstemmer"
+version = "2.2.0"
+description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
+ {file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
+]
+
+[[package]]
+name = "soupsieve"
+version = "2.4"
+description = "A modern CSS selector implementation for Beautiful Soup."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "soupsieve-2.4-py3-none-any.whl", hash = "sha256:49e5368c2cda80ee7e84da9dbe3e110b70a4575f196efb74e51b94549d921955"},
+ {file = "soupsieve-2.4.tar.gz", hash = "sha256:e28dba9ca6c7c00173e34e4ba57448f0688bb681b7c5e8bf4971daafc093d69a"},
+]
+
+[[package]]
+name = "terminado"
+version = "0.17.1"
+description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "terminado-0.17.1-py3-none-any.whl", hash = "sha256:8650d44334eba354dd591129ca3124a6ba42c3d5b70df5051b6921d506fdaeae"},
+ {file = "terminado-0.17.1.tar.gz", hash = "sha256:6ccbbcd3a4f8a25a5ec04991f39a0b8db52dfcd487ea0e578d977e6752380333"},
+]
+
+[package.dependencies]
+ptyprocess = {version = "*", markers = "os_name != \"nt\""}
+pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
+tornado = ">=6.1.0"
+
+[package.extras]
+docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"]
+test = ["pre-commit", "pytest (>=7.0)", "pytest-timeout"]
+
+[[package]]
+name = "tinycss2"
+version = "1.2.1"
+description = "A tiny CSS parser"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "tinycss2-1.2.1-py3-none-any.whl", hash = "sha256:2b80a96d41e7c3914b8cda8bc7f705a4d9c49275616e886103dd839dfc847847"},
+ {file = "tinycss2-1.2.1.tar.gz", hash = "sha256:8cff3a8f066c2ec677c06dbc7b45619804a6938478d9d73c284b29d14ecb0627"},
+]
+
+[package.dependencies]
+webencodings = ">=0.4"
+
+[package.extras]
+doc = ["sphinx", "sphinx_rtd_theme"]
+test = ["flake8", "isort", "pytest"]
+
+[[package]]
+name = "toml"
+version = "0.10.2"
+description = "Python Library for Tom's Obvious, Minimal Language"
+category = "dev"
+optional = false
+python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+files = [
+ {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
+ {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
+]
+
+[[package]]
+name = "tomli"
+version = "2.0.1"
+description = "A lil' TOML parser"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
+ {file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
+]
+
+[[package]]
+name = "tomlkit"
+version = "0.7.0"
+description = "Style preserving TOML library"
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+files = [
+ {file = "tomlkit-0.7.0-py2.py3-none-any.whl", hash = "sha256:6babbd33b17d5c9691896b0e68159215a9387ebfa938aa3ac42f4a4beeb2b831"},
+ {file = "tomlkit-0.7.0.tar.gz", hash = "sha256:ac57f29693fab3e309ea789252fcce3061e19110085aa31af5446ca749325618"},
+]
+
+[[package]]
+name = "torch"
+version = "1.13.1"
+description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
+category = "main"
+optional = false
+python-versions = ">=3.7.0"
+files = [
+ {file = "torch-1.13.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:fd12043868a34a8da7d490bf6db66991108b00ffbeecb034228bfcbbd4197143"},
+ {file = "torch-1.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:d9fe785d375f2e26a5d5eba5de91f89e6a3be5d11efb497e76705fdf93fa3c2e"},
+ {file = "torch-1.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:98124598cdff4c287dbf50f53fb455f0c1e3a88022b39648102957f3445e9b76"},
+ {file = "torch-1.13.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:393a6273c832e047581063fb74335ff50b4c566217019cc6ace318cd79eb0566"},
+ {file = "torch-1.13.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:0122806b111b949d21fa1a5f9764d1fd2fcc4a47cb7f8ff914204fd4fc752ed5"},
+ {file = "torch-1.13.1-cp311-cp311-manylinux1_x86_64.whl", hash = "sha256:22128502fd8f5b25ac1cd849ecb64a418382ae81dd4ce2b5cebaa09ab15b0d9b"},
+ {file = "torch-1.13.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:76024be052b659ac1304ab8475ab03ea0a12124c3e7626282c9c86798ac7bc11"},
+ {file = "torch-1.13.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:ea8dda84d796094eb8709df0fcd6b56dc20b58fdd6bc4e8d7109930dafc8e419"},
+ {file = "torch-1.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:2ee7b81e9c457252bddd7d3da66fb1f619a5d12c24d7074de91c4ddafb832c93"},
+ {file = "torch-1.13.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:0d9b8061048cfb78e675b9d2ea8503bfe30db43d583599ae8626b1263a0c1380"},
+ {file = "torch-1.13.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:f402ca80b66e9fbd661ed4287d7553f7f3899d9ab54bf5c67faada1555abde28"},
+ {file = "torch-1.13.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:727dbf00e2cf858052364c0e2a496684b9cb5aa01dc8a8bc8bbb7c54502bdcdd"},
+ {file = "torch-1.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:df8434b0695e9ceb8cc70650afc1310d8ba949e6db2a0525ddd9c3b2b181e5fe"},
+ {file = "torch-1.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:5e1e722a41f52a3f26f0c4fcec227e02c6c42f7c094f32e49d4beef7d1e213ea"},
+ {file = "torch-1.13.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:33e67eea526e0bbb9151263e65417a9ef2d8fa53cbe628e87310060c9dcfa312"},
+ {file = "torch-1.13.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:eeeb204d30fd40af6a2d80879b46a7efbe3cf43cdbeb8838dd4f3d126cc90b2b"},
+ {file = "torch-1.13.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:50ff5e76d70074f6653d191fe4f6a42fdbe0cf942fbe2a3af0b75eaa414ac038"},
+ {file = "torch-1.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:2c3581a3fd81eb1f0f22997cddffea569fea53bafa372b2c0471db373b26aafc"},
+ {file = "torch-1.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:0aa46f0ac95050c604bcf9ef71da9f1172e5037fdf2ebe051962d47b123848e7"},
+ {file = "torch-1.13.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:6930791efa8757cb6974af73d4996b6b50c592882a324b8fb0589c6a9ba2ddaf"},
+ {file = "torch-1.13.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:e0df902a7c7dd6c795698532ee5970ce898672625635d885eade9976e5a04949"},
+]
+
+[package.dependencies]
+nvidia-cublas-cu11 = {version = "11.10.3.66", markers = "platform_system == \"Linux\""}
+nvidia-cuda-nvrtc-cu11 = {version = "11.7.99", markers = "platform_system == \"Linux\""}
+nvidia-cuda-runtime-cu11 = {version = "11.7.99", markers = "platform_system == \"Linux\""}
+nvidia-cudnn-cu11 = {version = "8.5.0.96", markers = "platform_system == \"Linux\""}
+typing-extensions = "*"
+
+[package.extras]
+opt-einsum = ["opt-einsum (>=3.3)"]
+
+[[package]]
+name = "tornado"
+version = "6.2"
+description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
+category = "dev"
+optional = false
+python-versions = ">= 3.7"
+files = [
+ {file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
+ {file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
+ {file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
+ {file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
+ {file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
+ {file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
+ {file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
+ {file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
+ {file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
+ {file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
+ {file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
+]
+
+[[package]]
+name = "tqdm"
+version = "4.64.1"
+description = "Fast, Extensible Progress Meter"
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
+files = [
+ {file = "tqdm-4.64.1-py2.py3-none-any.whl", hash = "sha256:6fee160d6ffcd1b1c68c65f14c829c22832bc401726335ce92c52d395944a6a1"},
+ {file = "tqdm-4.64.1.tar.gz", hash = "sha256:5f4f682a004951c1b450bc753c710e9280c5746ce6ffedee253ddbcbf54cf1e4"},
+]
+
+[package.dependencies]
+colorama = {version = "*", markers = "platform_system == \"Windows\""}
+
+[package.extras]
+dev = ["py-make (>=0.1.0)", "twine", "wheel"]
+notebook = ["ipywidgets (>=6)"]
+slack = ["slack-sdk"]
+telegram = ["requests"]
+
+[[package]]
+name = "traitlets"
+version = "5.9.0"
+description = "Traitlets Python configuration system"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "traitlets-5.9.0-py3-none-any.whl", hash = "sha256:9e6ec080259b9a5940c797d58b613b5e31441c2257b87c2e795c5228ae80d2d8"},
+ {file = "traitlets-5.9.0.tar.gz", hash = "sha256:f6cde21a9c68cf756af02035f72d5a723bf607e862e7be33ece505abf4a3bad9"},
+]
+
+[package.extras]
+docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"]
+test = ["argcomplete (>=2.0)", "pre-commit", "pytest", "pytest-mock"]
+
+[[package]]
+name = "twine"
+version = "3.8.0"
+description = "Collection of utilities for publishing packages on PyPI"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "twine-3.8.0-py3-none-any.whl", hash = "sha256:d0550fca9dc19f3d5e8eadfce0c227294df0a2a951251a4385797c8a6198b7c8"},
+ {file = "twine-3.8.0.tar.gz", hash = "sha256:8efa52658e0ae770686a13b675569328f1fba9837e5de1867bfe5f46a9aefe19"},
+]
+
+[package.dependencies]
+colorama = ">=0.4.3"
+importlib-metadata = ">=3.6"
+keyring = ">=15.1"
+pkginfo = ">=1.8.1"
+readme-renderer = ">=21.0"
+requests = ">=2.20"
+requests-toolbelt = ">=0.8.0,<0.9.0 || >0.9.0"
+rfc3986 = ">=1.4.0"
+tqdm = ">=4.14"
+urllib3 = ">=1.26.0"
+
+[[package]]
+name = "typed-ast"
+version = "1.5.4"
+description = "a fork of Python 2 and 3 ast modules with type comment support"
+category = "dev"
+optional = false
+python-versions = ">=3.6"
+files = [
+ {file = "typed_ast-1.5.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:669dd0c4167f6f2cd9f57041e03c3c2ebf9063d0757dc89f79ba1daa2bfca9d4"},
+ {file = "typed_ast-1.5.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:211260621ab1cd7324e0798d6be953d00b74e0428382991adfddb352252f1d62"},
+ {file = "typed_ast-1.5.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:267e3f78697a6c00c689c03db4876dd1efdfea2f251a5ad6555e82a26847b4ac"},
+ {file = "typed_ast-1.5.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c542eeda69212fa10a7ada75e668876fdec5f856cd3d06829e6aa64ad17c8dfe"},
+ {file = "typed_ast-1.5.4-cp310-cp310-win_amd64.whl", hash = "sha256:a9916d2bb8865f973824fb47436fa45e1ebf2efd920f2b9f99342cb7fab93f72"},
+ {file = "typed_ast-1.5.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:79b1e0869db7c830ba6a981d58711c88b6677506e648496b1f64ac7d15633aec"},
+ {file = "typed_ast-1.5.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a94d55d142c9265f4ea46fab70977a1944ecae359ae867397757d836ea5a3f47"},
+ {file = "typed_ast-1.5.4-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:183afdf0ec5b1b211724dfef3d2cad2d767cbefac291f24d69b00546c1837fb6"},
+ {file = "typed_ast-1.5.4-cp36-cp36m-win_amd64.whl", hash = "sha256:639c5f0b21776605dd6c9dbe592d5228f021404dafd377e2b7ac046b0349b1a1"},
+ {file = "typed_ast-1.5.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cf4afcfac006ece570e32d6fa90ab74a17245b83dfd6655a6f68568098345ff6"},
+ {file = "typed_ast-1.5.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed855bbe3eb3715fca349c80174cfcfd699c2f9de574d40527b8429acae23a66"},
+ {file = "typed_ast-1.5.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6778e1b2f81dfc7bc58e4b259363b83d2e509a65198e85d5700dfae4c6c8ff1c"},
+ {file = "typed_ast-1.5.4-cp37-cp37m-win_amd64.whl", hash = "sha256:0261195c2062caf107831e92a76764c81227dae162c4f75192c0d489faf751a2"},
+ {file = "typed_ast-1.5.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2efae9db7a8c05ad5547d522e7dbe62c83d838d3906a3716d1478b6c1d61388d"},
+ {file = "typed_ast-1.5.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7d5d014b7daa8b0bf2eaef684295acae12b036d79f54178b92a2b6a56f92278f"},
+ {file = "typed_ast-1.5.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:370788a63915e82fd6f212865a596a0fefcbb7d408bbbb13dea723d971ed8bdc"},
+ {file = "typed_ast-1.5.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4e964b4ff86550a7a7d56345c7864b18f403f5bd7380edf44a3c1fb4ee7ac6c6"},
+ {file = "typed_ast-1.5.4-cp38-cp38-win_amd64.whl", hash = "sha256:683407d92dc953c8a7347119596f0b0e6c55eb98ebebd9b23437501b28dcbb8e"},
+ {file = "typed_ast-1.5.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4879da6c9b73443f97e731b617184a596ac1235fe91f98d279a7af36c796da35"},
+ {file = "typed_ast-1.5.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3e123d878ba170397916557d31c8f589951e353cc95fb7f24f6bb69adc1a8a97"},
+ {file = "typed_ast-1.5.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ebd9d7f80ccf7a82ac5f88c521115cc55d84e35bf8b446fcd7836eb6b98929a3"},
+ {file = "typed_ast-1.5.4-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98f80dee3c03455e92796b58b98ff6ca0b2a6f652120c263efdba4d6c5e58f72"},
+ {file = "typed_ast-1.5.4-cp39-cp39-win_amd64.whl", hash = "sha256:0fdbcf2fef0ca421a3f5912555804296f0b0960f0418c440f5d6d3abb549f3e1"},
+ {file = "typed_ast-1.5.4.tar.gz", hash = "sha256:39e21ceb7388e4bb37f4c679d72707ed46c2fbf2a5609b8b8ebc4b067d977df2"},
+]
+
+[[package]]
+name = "types-setuptools"
+version = "67.4.0.3"
+description = "Typing stubs for setuptools"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "types-setuptools-67.4.0.3.tar.gz", hash = "sha256:19e958dfdbf1c5a628e54c2a7ee84935051afb7278d0c1cdb08ac194757ee3b1"},
+ {file = "types_setuptools-67.4.0.3-py3-none-any.whl", hash = "sha256:3c83c3a6363dd3ddcdd054796705605f0fa8b8e5a39390e07a05e5f7af054978"},
+]
+
+[[package]]
+name = "types-toml"
+version = "0.10.8.5"
+description = "Typing stubs for toml"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "types-toml-0.10.8.5.tar.gz", hash = "sha256:bf80fce7d2d74be91148f47b88d9ae5adeb1024abef22aa2fdbabc036d6b8b3c"},
+ {file = "types_toml-0.10.8.5-py3-none-any.whl", hash = "sha256:2432017febe43174af0f3c65f03116e3d3cf43e7e1406b8200e106da8cf98992"},
+]
+
+[[package]]
+name = "typing-extensions"
+version = "3.10.0.2"
+description = "Backported and Experimental Type Hints for Python 3.5+"
+category = "main"
+optional = false
+python-versions = "*"
+files = [
+ {file = "typing_extensions-3.10.0.2-py2-none-any.whl", hash = "sha256:d8226d10bc02a29bcc81df19a26e56a9647f8b0a6d4a83924139f4a8b01f17b7"},
+ {file = "typing_extensions-3.10.0.2-py3-none-any.whl", hash = "sha256:f1d25edafde516b146ecd0613dabcc61409817af4766fbbcfb8d1ad4ec441a34"},
+ {file = "typing_extensions-3.10.0.2.tar.gz", hash = "sha256:49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e"},
+]
+
+[[package]]
+name = "urllib3"
+version = "1.26.14"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+category = "dev"
+optional = false
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
+files = [
+ {file = "urllib3-1.26.14-py2.py3-none-any.whl", hash = "sha256:75edcdc2f7d85b137124a6c3c9fc3933cdeaa12ecb9a6a959f22797a0feca7e1"},
+ {file = "urllib3-1.26.14.tar.gz", hash = "sha256:076907bf8fd355cde77728471316625a4d2f7e713c125f51953bb5b3eecf4f72"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
+secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
+socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
+
+[[package]]
+name = "wcwidth"
+version = "0.2.6"
+description = "Measures the displayed width of unicode strings in a terminal"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "wcwidth-0.2.6-py2.py3-none-any.whl", hash = "sha256:795b138f6875577cd91bba52baf9e445cd5118fd32723b460e30a0af30ea230e"},
+ {file = "wcwidth-0.2.6.tar.gz", hash = "sha256:a5220780a404dbe3353789870978e472cfe477761f06ee55077256e509b156d0"},
+]
+
+[[package]]
+name = "webencodings"
+version = "0.5.1"
+description = "Character encoding aliases for legacy web content"
+category = "dev"
+optional = false
+python-versions = "*"
+files = [
+ {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
+ {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
+]
+
+[[package]]
+name = "websocket-client"
+version = "1.5.1"
+description = "WebSocket client for Python with low level API options"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "websocket-client-1.5.1.tar.gz", hash = "sha256:3f09e6d8230892547132177f575a4e3e73cfdf06526e20cc02aa1c3b47184d40"},
+ {file = "websocket_client-1.5.1-py3-none-any.whl", hash = "sha256:cdf5877568b7e83aa7cf2244ab56a3213de587bbe0ce9d8b9600fc77b455d89e"},
+]
+
+[package.extras]
+docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
+optional = ["python-socks", "wsaccel"]
+test = ["websockets"]
+
+[[package]]
+name = "wheel"
+version = "0.38.4"
+description = "A built-package format for Python"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"},
+ {file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"},
+]
+
+[package.extras]
+test = ["pytest (>=3.0.0)"]
+
+[[package]]
+name = "widgetsnbextension"
+version = "4.0.5"
+description = "Jupyter interactive widgets for Jupyter Notebook"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "widgetsnbextension-4.0.5-py3-none-any.whl", hash = "sha256:eaaaf434fb9b08bd197b2a14ffe45ddb5ac3897593d43c69287091e5f3147bf7"},
+ {file = "widgetsnbextension-4.0.5.tar.gz", hash = "sha256:003f716d930d385be3fd9de42dd9bf008e30053f73bddde235d14fbeaeff19af"},
+]
+
+[[package]]
+name = "wrapt"
+version = "1.13.3"
+description = "Module for decorators, wrappers and monkey patching."
+category = "dev"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
+files = [
+ {file = "wrapt-1.13.3-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:e05e60ff3b2b0342153be4d1b597bbcfd8330890056b9619f4ad6b8d5c96a81a"},
+ {file = "wrapt-1.13.3-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:85148f4225287b6a0665eef08a178c15097366d46b210574a658c1ff5b377489"},
+ {file = "wrapt-1.13.3-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:2dded5496e8f1592ec27079b28b6ad2a1ef0b9296d270f77b8e4a3a796cf6909"},
+ {file = "wrapt-1.13.3-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:e94b7d9deaa4cc7bac9198a58a7240aaf87fe56c6277ee25fa5b3aa1edebd229"},
+ {file = "wrapt-1.13.3-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:498e6217523111d07cd67e87a791f5e9ee769f9241fcf8a379696e25806965af"},
+ {file = "wrapt-1.13.3-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:ec7e20258ecc5174029a0f391e1b948bf2906cd64c198a9b8b281b811cbc04de"},
+ {file = "wrapt-1.13.3-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:87883690cae293541e08ba2da22cacaae0a092e0ed56bbba8d018cc486fbafbb"},
+ {file = "wrapt-1.13.3-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:f99c0489258086308aad4ae57da9e8ecf9e1f3f30fa35d5e170b4d4896554d80"},
+ {file = "wrapt-1.13.3-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:6a03d9917aee887690aa3f1747ce634e610f6db6f6b332b35c2dd89412912bca"},
+ {file = "wrapt-1.13.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:936503cb0a6ed28dbfa87e8fcd0a56458822144e9d11a49ccee6d9a8adb2ac44"},
+ {file = "wrapt-1.13.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f9c51d9af9abb899bd34ace878fbec8bf357b3194a10c4e8e0a25512826ef056"},
+ {file = "wrapt-1.13.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:220a869982ea9023e163ba915077816ca439489de6d2c09089b219f4e11b6785"},
+ {file = "wrapt-1.13.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:0877fe981fd76b183711d767500e6b3111378ed2043c145e21816ee589d91096"},
+ {file = "wrapt-1.13.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:43e69ffe47e3609a6aec0fe723001c60c65305784d964f5007d5b4fb1bc6bf33"},
+ {file = "wrapt-1.13.3-cp310-cp310-win32.whl", hash = "sha256:78dea98c81915bbf510eb6a3c9c24915e4660302937b9ae05a0947164248020f"},
+ {file = "wrapt-1.13.3-cp310-cp310-win_amd64.whl", hash = "sha256:ea3e746e29d4000cd98d572f3ee2a6050a4f784bb536f4ac1f035987fc1ed83e"},
+ {file = "wrapt-1.13.3-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:8c73c1a2ec7c98d7eaded149f6d225a692caa1bd7b2401a14125446e9e90410d"},
+ {file = "wrapt-1.13.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:086218a72ec7d986a3eddb7707c8c4526d677c7b35e355875a0fe2918b059179"},
+ {file = "wrapt-1.13.3-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:e92d0d4fa68ea0c02d39f1e2f9cb5bc4b4a71e8c442207433d8db47ee79d7aa3"},
+ {file = "wrapt-1.13.3-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:d4a5f6146cfa5c7ba0134249665acd322a70d1ea61732723c7d3e8cc0fa80755"},
+ {file = "wrapt-1.13.3-cp35-cp35m-win32.whl", hash = "sha256:8aab36778fa9bba1a8f06a4919556f9f8c7b33102bd71b3ab307bb3fecb21851"},
+ {file = "wrapt-1.13.3-cp35-cp35m-win_amd64.whl", hash = "sha256:944b180f61f5e36c0634d3202ba8509b986b5fbaf57db3e94df11abee244ba13"},
+ {file = "wrapt-1.13.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:2ebdde19cd3c8cdf8df3fc165bc7827334bc4e353465048b36f7deeae8ee0918"},
+ {file = "wrapt-1.13.3-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:610f5f83dd1e0ad40254c306f4764fcdc846641f120c3cf424ff57a19d5f7ade"},
+ {file = "wrapt-1.13.3-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5601f44a0f38fed36cc07db004f0eedeaadbdcec90e4e90509480e7e6060a5bc"},
+ {file = "wrapt-1.13.3-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:e6906d6f48437dfd80464f7d7af1740eadc572b9f7a4301e7dd3d65db285cacf"},
+ {file = "wrapt-1.13.3-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:766b32c762e07e26f50d8a3468e3b4228b3736c805018e4b0ec8cc01ecd88125"},
+ {file = "wrapt-1.13.3-cp36-cp36m-win32.whl", hash = "sha256:5f223101f21cfd41deec8ce3889dc59f88a59b409db028c469c9b20cfeefbe36"},
+ {file = "wrapt-1.13.3-cp36-cp36m-win_amd64.whl", hash = "sha256:f122ccd12fdc69628786d0c947bdd9cb2733be8f800d88b5a37c57f1f1d73c10"},
+ {file = "wrapt-1.13.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:46f7f3af321a573fc0c3586612db4decb7eb37172af1bc6173d81f5b66c2e068"},
+ {file = "wrapt-1.13.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:778fd096ee96890c10ce96187c76b3e99b2da44e08c9e24d5652f356873f6709"},
+ {file = "wrapt-1.13.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0cb23d36ed03bf46b894cfec777eec754146d68429c30431c99ef28482b5c1df"},
+ {file = "wrapt-1.13.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:96b81ae75591a795d8c90edc0bfaab44d3d41ffc1aae4d994c5aa21d9b8e19a2"},
+ {file = "wrapt-1.13.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:7dd215e4e8514004c8d810a73e342c536547038fb130205ec4bba9f5de35d45b"},
+ {file = "wrapt-1.13.3-cp37-cp37m-win32.whl", hash = "sha256:47f0a183743e7f71f29e4e21574ad3fa95676136f45b91afcf83f6a050914829"},
+ {file = "wrapt-1.13.3-cp37-cp37m-win_amd64.whl", hash = "sha256:fd76c47f20984b43d93de9a82011bb6e5f8325df6c9ed4d8310029a55fa361ea"},
+ {file = "wrapt-1.13.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b73d4b78807bd299b38e4598b8e7bd34ed55d480160d2e7fdaabd9931afa65f9"},
+ {file = "wrapt-1.13.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ec9465dd69d5657b5d2fa6133b3e1e989ae27d29471a672416fd729b429eb554"},
+ {file = "wrapt-1.13.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:dd91006848eb55af2159375134d724032a2d1d13bcc6f81cd8d3ed9f2b8e846c"},
+ {file = "wrapt-1.13.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ae9de71eb60940e58207f8e71fe113c639da42adb02fb2bcbcaccc1ccecd092b"},
+ {file = "wrapt-1.13.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:51799ca950cfee9396a87f4a1240622ac38973b6df5ef7a41e7f0b98797099ce"},
+ {file = "wrapt-1.13.3-cp38-cp38-win32.whl", hash = "sha256:4b9c458732450ec42578b5642ac53e312092acf8c0bfce140ada5ca1ac556f79"},
+ {file = "wrapt-1.13.3-cp38-cp38-win_amd64.whl", hash = "sha256:7dde79d007cd6dfa65afe404766057c2409316135cb892be4b1c768e3f3a11cb"},
+ {file = "wrapt-1.13.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:981da26722bebb9247a0601e2922cedf8bb7a600e89c852d063313102de6f2cb"},
+ {file = "wrapt-1.13.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:705e2af1f7be4707e49ced9153f8d72131090e52be9278b5dbb1498c749a1e32"},
+ {file = "wrapt-1.13.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:25b1b1d5df495d82be1c9d2fad408f7ce5ca8a38085e2da41bb63c914baadff7"},
+ {file = "wrapt-1.13.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:77416e6b17926d953b5c666a3cb718d5945df63ecf922af0ee576206d7033b5e"},
+ {file = "wrapt-1.13.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:865c0b50003616f05858b22174c40ffc27a38e67359fa1495605f96125f76640"},
+ {file = "wrapt-1.13.3-cp39-cp39-win32.whl", hash = "sha256:0a017a667d1f7411816e4bf214646d0ad5b1da2c1ea13dec6c162736ff25a374"},
+ {file = "wrapt-1.13.3-cp39-cp39-win_amd64.whl", hash = "sha256:81bd7c90d28a4b2e1df135bfbd7c23aee3050078ca6441bead44c42483f9ebfb"},
+ {file = "wrapt-1.13.3.tar.gz", hash = "sha256:1fea9cd438686e6682271d36f3481a9f3636195578bab9ca3382e2f5f01fc185"},
+]
+
+[[package]]
+name = "zipp"
+version = "3.15.0"
+description = "Backport of pathlib-compatible object wrapper for zip files"
+category = "dev"
+optional = false
+python-versions = ">=3.7"
+files = [
+ {file = "zipp-3.15.0-py3-none-any.whl", hash = "sha256:48904fc76a60e542af151aded95726c1a5c34ed43ab4134b597665c86d7ad556"},
+ {file = "zipp-3.15.0.tar.gz", hash = "sha256:112929ad649da941c23de50f356a2b5570c954b65150642bccdd66bf194d224b"},
+]
+
+[package.extras]
+docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
+testing = ["big-O", "flake8 (<5)", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
+
+[metadata]
+lock-version = "2.0"
+python-versions = ">=3.7,<3.11"
+content-hash = "6909a805bce2b77b4d8def5a9f8c5eeb20cd818716ef0c314878d5f2651623ce"
diff --git a/frontends/concrete-python/poetry.toml b/frontends/concrete-python/poetry.toml
new file mode 100644
index 000000000..ab1033bd3
--- /dev/null
+++ b/frontends/concrete-python/poetry.toml
@@ -0,0 +1,2 @@
+[virtualenvs]
+in-project = true
diff --git a/frontends/concrete-python/pylintrc b/frontends/concrete-python/pylintrc
new file mode 100644
index 000000000..a24e88ae5
--- /dev/null
+++ b/frontends/concrete-python/pylintrc
@@ -0,0 +1,624 @@
+[MASTER]
+
+# A comma-separated list of package or module names from where C extensions may
+# be loaded. Extensions are loading into the active Python interpreter and may
+# run arbitrary code.
+extension-pkg-allow-list=
+
+# A comma-separated list of package or module names from where C extensions may
+# be loaded. Extensions are loading into the active Python interpreter and may
+# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
+# for backward compatibility.)
+extension-pkg-whitelist=concrete,mlir
+
+# Return non-zero exit code if any of these messages/categories are detected,
+# even if score is above --fail-under value. Syntax same as enable. Messages
+# specified are enabled, while categories only check already-enabled messages.
+fail-on=
+
+# Specify a score threshold to be exceeded before program exits with error.
+fail-under=10.0
+
+# Files or directories to be skipped. They should be base names, not paths.
+ignore=CVS
+
+# Add files or directories matching the regex patterns to the ignore-list. The
+# regex matches against paths.
+ignore-paths=
+
+# Files or directories matching the regex patterns are skipped. The regex
+# matches against base names, not paths.
+ignore-patterns=
+
+# Python code to execute, usually for sys.path manipulation such as
+# pygtk.require().
+#init-hook=
+
+# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
+# number of processors available to use.
+jobs=1
+
+# Control the amount of potential inferred values when inferring a single
+# object. This can help the performance when dealing with large functions or
+# complex, nested conditions.
+limit-inference-results=100
+
+# List of plugins (as comma separated values of python module names) to load,
+# usually to register additional checkers.
+load-plugins=
+
+# Pickle collected data for later comparisons.
+persistent=yes
+
+# When enabled, pylint would attempt to guess common misconfiguration and emit
+# user-friendly hints instead of false-positive error messages.
+suggestion-mode=yes
+
+# Allow loading of arbitrary C extensions. Extensions are imported into the
+# active Python interpreter and may run arbitrary code.
+unsafe-load-any-extension=no
+
+
+[MESSAGES CONTROL]
+
+# Only show warnings with the listed confidence levels. Leave empty to show
+# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED.
+confidence=
+
+# Disable the message, report, category or checker with the given id(s). You
+# can either give multiple identifiers separated by comma (,) or put this
+# option multiple times (only on the command line, not in the configuration
+# file where it should appear only once). You can also use "--disable=all" to
+# disable everything first and then reenable specific checks. For example, if
+# you want to run only the similarities checker, you can use "--disable=all
+# --enable=similarities". If you want to run only the classes checker, but have
+# no Warning level messages displayed, use "--disable=all --enable=classes
+# --disable=W".
+disable=print-statement,
+ parameter-unpacking,
+ unpacking-in-except,
+ old-raise-syntax,
+ backtick,
+ long-suffix,
+ old-ne-operator,
+ old-octal-literal,
+ import-star-module-level,
+ non-ascii-bytes-literal,
+ raw-checker-failed,
+ bad-inline-option,
+ locally-disabled,
+ file-ignored,
+ suppressed-message,
+ useless-suppression,
+ deprecated-pragma,
+ use-symbolic-message-instead,
+ fixme,
+ apply-builtin,
+ basestring-builtin,
+ buffer-builtin,
+ cmp-builtin,
+ coerce-builtin,
+ execfile-builtin,
+ file-builtin,
+ long-builtin,
+ raw_input-builtin,
+ reduce-builtin,
+ standarderror-builtin,
+ unicode-builtin,
+ xrange-builtin,
+ coerce-method,
+ delslice-method,
+ getslice-method,
+ setslice-method,
+ no-absolute-import,
+ old-division,
+ dict-iter-method,
+ dict-view-method,
+ next-method-called,
+ metaclass-assignment,
+ indexing-exception,
+ raising-string,
+ reload-builtin,
+ oct-method,
+ hex-method,
+ nonzero-method,
+ cmp-method,
+ input-builtin,
+ round-builtin,
+ intern-builtin,
+ unichr-builtin,
+ map-builtin-not-iterating,
+ zip-builtin-not-iterating,
+ range-builtin-not-iterating,
+ filter-builtin-not-iterating,
+ using-cmp-argument,
+ eq-without-hash,
+ div-method,
+ idiv-method,
+ rdiv-method,
+ exception-message-attribute,
+ invalid-str-codec,
+ sys-max-int,
+ bad-python3-import,
+ deprecated-string-function,
+ deprecated-str-translate-call,
+ deprecated-itertools-function,
+ deprecated-types-field,
+ next-method-defined,
+ dict-items-not-iterating,
+ dict-keys-not-iterating,
+ dict-values-not-iterating,
+ deprecated-operator-function,
+ deprecated-urllib-function,
+ xreadlines-attribute,
+ deprecated-sys-function,
+ exception-escape,
+ comprehension-escape
+
+# Enable the message, report, category or checker with the given id(s). You can
+# either give multiple identifier separated by comma (,) or put this option
+# multiple time (only on the command line, not in the configuration file where
+# it should appear only once). See also the "--disable" option for examples.
+enable=c-extension-no-member
+
+
+[REPORTS]
+
+# Python expression which should return a score less than or equal to 10. You
+# have access to the variables 'error', 'warning', 'refactor', and 'convention'
+# which contain the number of messages in each category, as well as 'statement'
+# which is the total number of statements analyzed. This score is used by the
+# global evaluation report (RP0004).
+evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
+
+# Template used to display messages. This is a python new-style format string
+# used to format the message information. See doc for all details.
+#msg-template=
+
+# Set the output format. Available formats are text, parseable, colorized, json
+# and msvs (visual studio). You can also give a reporter class, e.g.
+# mypackage.mymodule.MyReporterClass.
+output-format=text
+
+# Tells whether to display a full report or only the messages.
+reports=no
+
+# Activate the evaluation score.
+score=yes
+
+
+[REFACTORING]
+
+# Maximum number of nested blocks for function / method body
+max-nested-blocks=5
+
+# Complete name of functions that never returns. When checking for
+# inconsistent-return-statements if a never returning function is called then
+# it will be considered as an explicit return statement and no message will be
+# printed.
+never-returning-functions=sys.exit,argparse.parse_error
+
+
+[BASIC]
+
+# Naming style matching correct argument names.
+argument-naming-style=snake_case
+
+# Regular expression matching correct argument names. Overrides argument-
+# naming-style.
+#argument-rgx=
+
+# Naming style matching correct attribute names.
+attr-naming-style=snake_case
+
+# Regular expression matching correct attribute names. Overrides attr-naming-
+# style.
+#attr-rgx=
+
+# Bad variable names which should always be refused, separated by a comma.
+bad-names=foo,
+ bar,
+ baz,
+ toto,
+ tutu,
+ tata
+
+# Bad variable names regexes, separated by a comma. If names match any regex,
+# they will always be refused
+bad-names-rgxs=
+
+# Naming style matching correct class attribute names.
+class-attribute-naming-style=any
+
+# Regular expression matching correct class attribute names. Overrides class-
+# attribute-naming-style.
+#class-attribute-rgx=
+
+# Naming style matching correct class constant names.
+class-const-naming-style=UPPER_CASE
+
+# Regular expression matching correct class constant names. Overrides class-
+# const-naming-style.
+#class-const-rgx=
+
+# Naming style matching correct class names.
+class-naming-style=PascalCase
+
+# Regular expression matching correct class names. Overrides class-naming-
+# style.
+#class-rgx=
+
+# Naming style matching correct constant names.
+const-naming-style=UPPER_CASE
+
+# Regular expression matching correct constant names. Overrides const-naming-
+# style.
+#const-rgx=
+
+# Minimum line length for functions/classes that require docstrings, shorter
+# ones are exempt.
+docstring-min-length=-1
+
+# Naming style matching correct function names.
+function-naming-style=snake_case
+
+# Regular expression matching correct function names. Overrides function-
+# naming-style.
+#function-rgx=
+
+# Good variable names which should always be accepted, separated by a comma.
+good-names=i,
+ j,
+ k,
+ ex,
+ Run,
+ _
+
+# Good variable names regexes, separated by a comma. If names match any regex,
+# they will always be accepted
+good-names-rgxs=^[a-z]$
+
+# Include a hint for the correct naming format with invalid-name.
+include-naming-hint=no
+
+# Naming style matching correct inline iteration names.
+inlinevar-naming-style=any
+
+# Regular expression matching correct inline iteration names. Overrides
+# inlinevar-naming-style.
+#inlinevar-rgx=
+
+# Naming style matching correct method names.
+method-naming-style=snake_case
+
+# Regular expression matching correct method names. Overrides method-naming-
+# style.
+#method-rgx=
+
+# Naming style matching correct module names.
+module-naming-style=snake_case
+
+# Regular expression matching correct module names. Overrides module-naming-
+# style.
+#module-rgx=
+
+# Colon-delimited sets of names that determine each other's naming style when
+# the name regexes allow several styles.
+name-group=
+
+# Regular expression which should only match function or class names that do
+# not require a docstring.
+no-docstring-rgx=^_
+
+# List of decorators that produce properties, such as abc.abstractproperty. Add
+# to this list to register other decorators that produce valid properties.
+# These decorators are taken in consideration only for invalid-name.
+property-classes=abc.abstractproperty
+
+# Naming style matching correct variable names.
+variable-naming-style=snake_case
+
+# Regular expression matching correct variable names. Overrides variable-
+# naming-style.
+#variable-rgx=
+
+
+[FORMAT]
+
+# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
+expected-line-ending-format=
+
+# Regexp for a line that is allowed to be longer than the limit.
+ignore-long-lines=^\s*(# )?<?https?://\S+>?$
+
+# Number of spaces of indent required inside a hanging or continued line.
+indent-after-paren=4
+
+# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
+# tab).
+indent-string='    '
+
+# Maximum number of characters on a single line.
+max-line-length=100
+
+# Maximum number of lines in a module.
+max-module-lines=1000
+
+# Allow the body of a class to be on the same line as the declaration if body
+# contains single statement.
+single-line-class-stmt=no
+
+# Allow the body of an if to be on the same line as the test if there is no
+# else.
+single-line-if-stmt=no
+
+
+[LOGGING]
+
+# The type of string formatting that logging methods do. `old` means using %
+# formatting, `new` is for `{}` formatting.
+logging-format-style=old
+
+# Logging modules to check that the string format arguments are in logging
+# function parameter format.
+logging-modules=logging
+
+
+[MISCELLANEOUS]
+
+# List of note tags to take in consideration, separated by a comma.
+notes=FIXME,
+ XXX,
+ TODO
+
+# Regular expression of note tags to take in consideration.
+#notes-rgx=
+
+
+[SIMILARITIES]
+
+# Ignore comments when computing similarities.
+ignore-comments=yes
+
+# Ignore docstrings when computing similarities.
+ignore-docstrings=yes
+
+# Ignore imports when computing similarities.
+ignore-imports=yes
+
+# Ignore function signatures when computing similarities.
+ignore-signatures=no
+
+# Minimum lines number of a similarity.
+min-similarity-lines=10
+
+
+[SPELLING]
+
+# Limits count of emitted suggestions for spelling mistakes.
+max-spelling-suggestions=4
+
+# Spelling dictionary name. Available dictionaries: none. To make it work,
+# install the 'python-enchant' package.
+spelling-dict=
+
+# List of comma separated words that should be considered directives if they
+# appear and the beginning of a comment and should not be checked.
+spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
+
+# List of comma separated words that should not be checked.
+spelling-ignore-words=
+
+# A path to a file that contains the private dictionary; one word per line.
+spelling-private-dict-file=
+
+# Tells whether to store unknown words to the private dictionary (see the
+# --spelling-private-dict-file option) instead of raising a message.
+spelling-store-unknown-words=no
+
+
+[STRING]
+
+# This flag controls whether inconsistent-quotes generates a warning when the
+# character used as a quote delimiter is used inconsistently within a module.
+check-quote-consistency=no
+
+# This flag controls whether the implicit-str-concat should generate a warning
+# on implicit string concatenation in sequences defined over several lines.
+check-str-concat-over-line-jumps=no
+
+
+[TYPECHECK]
+
+# List of decorators that produce context managers, such as
+# contextlib.contextmanager. Add to this list to register other decorators that
+# produce valid context managers.
+contextmanager-decorators=contextlib.contextmanager
+
+# List of members which are set dynamically and missed by pylint inference
+# system, and so shouldn't trigger E1101 when accessed. Python regular
+# expressions are accepted.
+generated-members=
+
+# Tells whether missing members accessed in mixin class should be ignored. A
+# mixin class is detected if its name ends with "mixin" (case insensitive).
+ignore-mixin-members=yes
+
+# Tells whether to warn about missing members when the owner of the attribute
+# is inferred to be None.
+ignore-none=yes
+
+# This flag controls whether pylint should warn about no-member and similar
+# checks whenever an opaque object is returned when inferring. The inference
+# can return multiple potential results while evaluating a Python object, but
+# some branches might not be evaluated, which results in partial inference. In
+# that case, it might be useful to still emit no-member and other checks for
+# the rest of the inferred objects.
+ignore-on-opaque-inference=yes
+
+# List of class names for which member attributes should not be checked (useful
+# for classes with dynamically set attributes). This supports the use of
+# qualified names.
+ignored-classes=optparse.Values,thread._local,_thread._local
+
+# List of module names for which member attributes should not be checked
+# (useful for modules/projects where namespaces are manipulated during runtime
+# and thus existing member attributes cannot be deduced by static analysis). It
+# supports qualified module names, as well as Unix pattern matching.
+ignored-modules=
+
+# Show a hint with possible names when a member name was not found. The aspect
+# of finding the hint is based on edit distance.
+missing-member-hint=yes
+
+# The minimum edit distance a name should have in order to be considered a
+# similar match for a missing member name.
+missing-member-hint-distance=1
+
+# The total number of similar names that should be taken in consideration when
+# showing a hint for a missing member.
+missing-member-max-choices=1
+
+# List of decorators that change the signature of a decorated function.
+signature-mutators=
+
+
+[VARIABLES]
+
+# List of additional names supposed to be defined in builtins. Remember that
+# you should avoid defining new builtins when possible.
+additional-builtins=
+
+# Tells whether unused global variables should be treated as a violation.
+allow-global-unused-variables=yes
+
+# List of names allowed to shadow builtins
+allowed-redefined-builtins=
+
+# List of strings which can identify a callback function by name. A callback
+# name must start or end with one of those strings.
+callbacks=cb_,
+ _cb
+
+# A regular expression matching the name of dummy variables (i.e. expected to
+# not be used).
+dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
+
+# Argument names that match this expression will be ignored. Default to name
+# with leading underscore.
+ignored-argument-names=_.*|^ignored_|^unused_
+
+# Tells whether we should check for unused import in __init__ files.
+init-import=no
+
+# List of qualified module names which can have objects that can redefine
+# builtins.
+redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io
+
+
+[CLASSES]
+
+# Warn about protected attribute access inside special methods
+check-protected-access-in-special-methods=yes
+
+# List of method names used to declare (i.e. assign) instance attributes.
+defining-attr-methods=__init__,
+ __new__,
+ setUp,
+ __post_init__
+
+# List of member names, which should be excluded from the protected access
+# warning.
+exclude-protected=_asdict,
+ _fields,
+ _replace,
+ _source,
+ _make
+
+# List of valid names for the first argument in a class method.
+valid-classmethod-first-arg=cls
+
+# List of valid names for the first argument in a metaclass class method.
+valid-metaclass-classmethod-first-arg=cls
+
+
+[DESIGN]
+
+# Maximum number of arguments for function / method.
+max-args=10
+
+# Maximum number of attributes for a class (see R0902).
+max-attributes=10
+
+# Maximum number of boolean expressions in an if statement (see R0916).
+max-bool-expr=5
+
+# Maximum number of branch for function / method body.
+max-branches=12
+
+# Maximum number of locals for function / method body.
+max-locals=25
+
+# Maximum number of parents for a class (see R0901).
+max-parents=7
+
+# Maximum number of public methods for a class (see R0904).
+max-public-methods=20
+
+# Maximum number of return / yield for function / method body.
+max-returns=6
+
+# Maximum number of statements in function / method body.
+max-statements=50
+
+# Minimum number of public methods for a class (see R0903).
+min-public-methods=0
+
+
+[IMPORTS]
+
+# List of modules that can be imported at any level, not just the top level
+# one.
+allow-any-import-level=
+
+# Allow wildcard imports from modules that define __all__.
+allow-wildcard-with-all=no
+
+# Analyse import fallback blocks. This can be used to support both Python 2 and
+# 3 compatible code, which means that the block might have code that exists
+# only in one or another interpreter, leading to false positives when analysed.
+analyse-fallback-blocks=no
+
+# Deprecated modules which should not be used, separated by a comma.
+deprecated-modules=
+
+# Output a graph (.gv or any supported image format) of external dependencies
+# to the given file (report RP0402 must not be disabled).
+ext-import-graph=
+
+# Output a graph (.gv or any supported image format) of all (i.e. internal and
+# external) dependencies to the given file (report RP0402 must not be
+# disabled).
+import-graph=
+
+# Output a graph (.gv or any supported image format) of internal dependencies
+# to the given file (report RP0402 must not be disabled).
+int-import-graph=
+
+# Force import order to recognize a module as part of the standard
+# compatibility libraries.
+known-standard-library=
+
+# Force import order to recognize a module as part of a third party library.
+known-third-party=enchant,concrete,mlir
+
+# Couples of modules and preferred modules, separated by a comma.
+preferred-modules=
+
+
+[EXCEPTIONS]
+
+# Exceptions that will emit a warning when being caught. Defaults to
+# "BaseException, Exception".
+overgeneral-exceptions=BaseException,
+ Exception
diff --git a/frontends/concrete-python/pyproject.toml b/frontends/concrete-python/pyproject.toml
new file mode 100644
index 000000000..091f15e97
--- /dev/null
+++ b/frontends/concrete-python/pyproject.toml
@@ -0,0 +1,118 @@
+[tool.poetry]
+name = "concrete-numpy"
+version = "1.0.0-rc2"
+description = "Concrete Numpy is an open-source library which simplifies the use of fully homomorphic encryption (FHE)."
+license = "BSD-3-Clause"
+authors = [
+ "Zama ",
+ "Arthur Meyre ",
+ "Umut Sahin ",
+ "Benoit Chevallier-Mames ",
+ "Jordan Frery ",
+ "Alexandre Quint ",
+ "Ayoub Benaissa ",
+ "Andrei Stoian ",
+ "Jeremy Bradley ",
+]
+homepage = "https://zama.ai/concrete/"
+repository = "https://github.com/zama-ai/concrete-numpy"
+documentation = "http://docs.zama.ai/concrete-numpy/"
+keywords = ["FHE", "homomorphic encryption", "privacy", "security"]
+packages = [
+ { include = "concrete" },
+]
+classifiers = [
+ "Topic :: Scientific/Engineering :: Artificial Intelligence",
+ "Topic :: Scientific/Engineering :: Mathematics",
+ "Topic :: Scientific/Engineering",
+ "Topic :: Security",
+ "Topic :: Security :: Cryptography",
+ "Topic :: Software Development :: Compilers",
+]
+readme = "README.md"
+
+[tool.poetry.urls]
+"README" = "https://github.com/zama-ai/concrete-numpy/blob/main/README.md"
+"Bug Tracker" = "https://github.com/zama-ai/concrete-numpy/issues"
+"Discourse" = "https://community.zama.ai/c/concrete-numpy/7"
+
+[tool.poetry.dependencies]
+python = ">=3.7,<3.11"
+networkx = "^2.6.3"
+matplotlib = "^3.5.1"
+numpy = [
+ {version = "^1.23.5", python = ">=3.8"},
+ {version = "1.21.6", python = "<3.8"}
+]
+concrete-compiler = "0.24.0rc5"
+torch = "^1.13.1"
+scipy = [
+ {version = "^1.10.1", python = ">=3.8"},
+ {version = "1.7.3", python = "<3.8"}
+]
+
+[tool.poetry.dev-dependencies]
+isort = "^5.10.1"
+black = "^22.3.0"
+pylint = "2.11.1"
+pytest = "^6.2.5"
+pytest-cov = "^3.0.0"
+mypy = "^1.0"
+pydocstyle = "^6.1.1"
+jupyter = "^1.0.0"
+flake8 = "^4.0.1"
+flake8-bugbear = "^21.11.29"
+tqdm = "^4.62.3"
+psutil = "^5.9.0"
+py-cpuinfo = "^8.0.0"
+python-dotenv = "^0.19.2"
+nbmake = "^1.1"
+python-semantic-release = "7.23.0"
+semver = "^2.13.0"
+tomlkit = "^0.7.0"
+GitPython = "^3.1.26"
+pytest-xdist = "^2.5.0"
+pytest-randomly = "^3.11.0"
+pygments-style-tomorrow = "^1.0.0"
+beautifulsoup4 = "^4.10.0"
+pip-licenses = "^3.5.3"
+pip-audit = "^1.1.1"
+pytest-codeblocks = "^0.12.2"
+twine = "^3.7.1"
+ruff = "^0.0.191"
+requests = "^2.28.2"
+
+[build-system]
+requires = ["poetry-core>=1.0.0"]
+build-backend = "poetry.core.masonry.api"
+
+[tool.pytest.ini_options]
+filterwarnings = [
+ "error",
+ "ignore:Implicitly cleaning up:ResourceWarning",
+ "ignore:pandas not found, skipping conversion test.:ImportWarning",
+ "ignore:scipy not found, skipping conversion test.:ImportWarning",
+ "ignore:Matplotlib is currently using .*, which is a non-GUI backend, so cannot show the figure\\.:UserWarning",
+ "ignore:Deprecated call to `pkg_resources.declare_namespace:DeprecationWarning"
+]
+
+[tool.semantic_release]
+version_toml = "pyproject.toml:tool.poetry.version"
+upload_to_pypi = "False"
+
+[tool.ruff]
+target-version = "py37"
+line-length = 100
+select = [
+ "F", "E", "W", "C90", "I", "UP", "N", "YTT", "S", "BLE", "FBT", "B", "C4",
+ "T10", "EM", "ICN", "Q", "RET", "SIM", "TID", "ARG", "DTZ", "ERA", "PD", "PGH",
+ "PLC", "PLE", "PLR", "PLW", "RUF"
+]
+ignore = [
+ "A", "D", "FBT", "T20", "ANN", "N806", "ARG001", "S101", "BLE001", "RUF100", "ERA001",
+ "RET504", "TID252", "PD011", "I001", "UP015", "C901", "A001", "SIM118", "PGH003"
+]
+
+[tool.ruff.per-file-ignores]
+"**/__init__.py" = ["F401"]
+"tests/**" = ["PLC2201"]
diff --git a/frontends/concrete-python/script/actions_utils/RELEASE_TEMPLATE.md b/frontends/concrete-python/script/actions_utils/RELEASE_TEMPLATE.md
new file mode 100644
index 000000000..b8c86727d
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/RELEASE_TEMPLATE.md
@@ -0,0 +1,5 @@
+## Summary
+
+_Please fill here with information about the main features in this release, or the main reason for having a delivery (e.g., fixing an annoying bug)_
+
+## Links
diff --git a/frontends/concrete-python/script/actions_utils/actions_combine_status.py b/frontends/concrete-python/script/actions_utils/actions_combine_status.py
new file mode 100644
index 000000000..9ffdc8f57
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/actions_combine_status.py
@@ -0,0 +1,40 @@
+"""Helper script for github actions to combine job statuses"""
+import argparse
+import json
+
+RESULTS_TO_DISPLAY_LEVEL = {
+ "failure": 0,
+ "cancelled": 1,
+ "success": 2,
+ "skipped": 3,
+}
+
+DISPLAY_LEVEL_TO_RESULTS = {val: key for key, val in RESULTS_TO_DISPLAY_LEVEL.items()}
+
+
+def main(args):
+ """Entry point"""
+
+ need_context_data = None
+ with open(args.needs_context_json, encoding="utf-8") as f:
+ need_context_data = json.load(f)
+
+ display_level = min(
+ RESULTS_TO_DISPLAY_LEVEL[job_object["result"]] for job_object in need_context_data.values()
+ )
+
+ print(DISPLAY_LEVEL_TO_RESULTS[display_level])
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("Combine github actions statuses", allow_abbrev=False)
+
+ parser.add_argument(
+ "--needs_context_json",
+ type=str,
+ help="Pass the json file path containing the workflow needs context",
+ )
+
+ cli_args = parser.parse_args()
+
+ main(cli_args)
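The combination rule above (the worst job result wins) can be exercised in isolation; the job names and results below are hypothetical:

```python
# Sketch of the status-combination rule used by actions_combine_status.py:
# every job result maps to a severity level and the most severe (lowest
# level) result is reported for the whole workflow.
RESULTS_TO_DISPLAY_LEVEL = {"failure": 0, "cancelled": 1, "success": 2, "skipped": 3}
DISPLAY_LEVEL_TO_RESULTS = {val: key for key, val in RESULTS_TO_DISPLAY_LEVEL.items()}


def combine(needs_context: dict) -> str:
    """Return the combined status for a GitHub Actions 'needs' context."""
    level = min(RESULTS_TO_DISPLAY_LEVEL[job["result"]] for job in needs_context.values())
    return DISPLAY_LEVEL_TO_RESULTS[level]


# Hypothetical 'needs' context: one failed job dominates the others.
needs = {
    "build": {"result": "success"},
    "lint": {"result": "skipped"},
    "test": {"result": "failure"},
}
print(combine(needs))  # a single failed job makes the combined status "failure"
```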
diff --git a/frontends/concrete-python/script/actions_utils/coverage.sh b/frontends/concrete-python/script/actions_utils/coverage.sh
new file mode 100755
index 000000000..e8ecc73c3
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/coverage.sh
@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+
+set -o pipefail
+set +e
+
+CURR_DIR=$(dirname "$0")
+
+# Format diff-coverage.txt for PR comment
+poetry run python "$CURR_DIR"/coverage_report_format.py \
+global-coverage \
+--global-coverage-json-file "$1" \
+--global-coverage-output-file diff-coverage.txt
diff --git a/frontends/concrete-python/script/actions_utils/coverage_report_format.py b/frontends/concrete-python/script/actions_utils/coverage_report_format.py
new file mode 100755
index 000000000..27a0350c9
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/coverage_report_format.py
@@ -0,0 +1,74 @@
+"""Helper script for github actions"""
+import argparse
+import json
+from pathlib import Path
+
+
+def write_coverage_file(coverage_file_path: Path, exit_code: int, coverage_content):
+ """Write the formatted coverage to file."""
+ with open(coverage_file_path, "w", encoding="utf-8") as f:
+ if exit_code == 0:
+ f.write("## Coverage passed ✅\n\n")
+ else:
+ f.write("## Coverage failed ❌\n\n")
+
+ # Open collapsible section
+ f.write("<details><summary>Coverage details</summary>\n\n\n")
+ f.write("```\n")
+
+ f.writelines(coverage_content)
+
+ # Close collapsible section
+ f.write("```\n\n")
+ f.write("</details>\n<br/>\n\n")
+
+
+def diff_coverage(args):
+ """diff-coverage entry point."""
+ diff_cover_file_path = Path(args.diff_cover_output).resolve()
+ diff_cover_content = None
+
+ with open(diff_cover_file_path, "r", encoding="utf-8") as f:
+ diff_cover_content = f.readlines()
+
+ write_coverage_file(diff_cover_file_path, args.diff_cover_exit_code, diff_cover_content)
+
+
+def global_coverage(args):
+ """global-coverage entry point."""
+ global_coverage_json_path = Path(args.global_coverage_json_file).resolve()
+ global_coverage_infos = None
+ with open(global_coverage_json_path, "r", encoding="utf-8") as f:
+ global_coverage_infos = json.load(f)
+
+ exit_code = global_coverage_infos["exit_code"]
+ coverage_content = global_coverage_infos["content"]
+ global_coverage_output_file_path = Path(args.global_coverage_output_file).resolve()
+ write_coverage_file(global_coverage_output_file_path, exit_code, coverage_content)
+
+
+def main(args):
+ """Entry point"""
+ args.entry_point(args)
+
+
+if __name__ == "__main__":
+ main_parser = argparse.ArgumentParser(allow_abbrev=False)
+
+ sub_parsers = main_parser.add_subparsers(dest="sub-command", required=True)
+
+ parser_diff_coverage = sub_parsers.add_parser("diff-coverage")
+
+ parser_diff_coverage.add_argument("--diff-cover-exit-code", type=int, required=True)
+ parser_diff_coverage.add_argument("--diff-cover-output", type=str, required=True)
+ parser_diff_coverage.set_defaults(entry_point=diff_coverage)
+
+ parser_global_coverage = sub_parsers.add_parser("global-coverage")
+
+ parser_global_coverage.add_argument("--global-coverage-output-file", type=str, required=True)
+ parser_global_coverage.add_argument("--global-coverage-json-file", type=str, required=True)
+ parser_global_coverage.set_defaults(entry_point=global_coverage)
+
+ cli_args = main_parser.parse_args()
+
+ main(cli_args)
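The PR-comment layout written by `write_coverage_file` can be sketched standalone; this sketch assumes an HTML `<details>` block, the usual way to make a section collapsible in GitHub comments:

```python
# Minimal sketch of the collapsible coverage comment produced by
# write_coverage_file: a pass/fail header followed by the raw coverage
# output inside a <details> block (assumed layout, not the exact strings).
def render(exit_code: int, coverage_lines) -> str:
    """Return the markdown body for a coverage PR comment."""
    header = "## Coverage passed ✅\n\n" if exit_code == 0 else "## Coverage failed ❌\n\n"
    body = "".join(coverage_lines)
    return (
        header
        + "<details><summary>Coverage details</summary>\n\n"
        + "```\n" + body + "```\n\n"
        + "</details>\n\n"
    )


out = render(0, ["TOTAL 95%\n"])  # hypothetical coverage output
print(out.splitlines()[0])
```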
diff --git a/frontends/concrete-python/script/actions_utils/generate_test_matrix.py b/frontends/concrete-python/script/actions_utils/generate_test_matrix.py
new file mode 100644
index 000000000..61bc26184
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/generate_test_matrix.py
@@ -0,0 +1,95 @@
+"""Script to generate custom GitHub actions test matrices."""
+
+import argparse
+import itertools
+import json
+from pathlib import Path
+
+WEEKLY = "weekly"
+RELEASE = "release"
+PR = "pr"
+PUSH_TO_MAIN = "push_to_main"
+
+LINUX = "linux"
+MACOS = "macos"
+
+OSES = {LINUX, MACOS}
+
+PR_OSES = {LINUX: "ubuntu-22.04"}
+PR_PYTHON_VERSIONS = ["3.7"]
+PR_CONF = {"os": PR_OSES, "python": PR_PYTHON_VERSIONS}
+
+PUSH_TO_MAIN_OSES = {LINUX: "ubuntu-22.04"}
+PUSH_TO_MAIN_PYTHON_VERSIONS = ["3.7"]
+PUSH_TO_MAIN_CONF = {"os": PUSH_TO_MAIN_OSES, "python": PUSH_TO_MAIN_PYTHON_VERSIONS}
+
+WEEKLY_OSES = {
+ LINUX: "ubuntu-22.04",
+ MACOS: "macos-11",
+}
+WEEKLY_PYTHON_VERSIONS = ["3.7", "3.8", "3.9", "3.10"]
+WEEKLY_CONF = {"os": WEEKLY_OSES, "python": WEEKLY_PYTHON_VERSIONS}
+
+# The OSes here are to indicate the OSes used for runners during release
+RELEASE_OSES = {
+ LINUX: "ubuntu-22.04",
+ # TODO: https://github.com/zama-ai/concrete-numpy-internal/issues/1340
+ # Re-enable macOS for release once we have the duration of the tests
+ # MACOS: "macos-10.15",
+}
+# The python versions will be used to build packages during release
+RELEASE_PYTHON_VERSIONS = ["3.7", "3.8", "3.9", "3.10"]
+RELEASE_CONF = {"os": RELEASE_OSES, "python": RELEASE_PYTHON_VERSIONS}
+
+CONFIGURATIONS = {
+ PR: PR_CONF,
+ WEEKLY: WEEKLY_CONF,
+ RELEASE: RELEASE_CONF,
+ PUSH_TO_MAIN: PUSH_TO_MAIN_CONF,
+}
+
+
+def main(args):
+ """Entry point."""
+
+ matrix_conf = CONFIGURATIONS[args.build_type]
+
+ github_action_matrix = []
+
+ for (os_kind, os_name), python_version in itertools.product(
+ matrix_conf["os"].items(), matrix_conf["python"]
+ ):
+ github_action_matrix.append(
+ {
+ "os_kind": os_kind,
+ "runs_on": os_name,
+ "python_version": python_version,
+ }
+ )
+
+ print(json.dumps(github_action_matrix, indent=4))
+
+ output_json_path = Path(args.output_json).resolve()
+
+ with open(output_json_path, "w", encoding="utf-8") as f:
+ json.dump(github_action_matrix, f)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("Generate GHA test matrices", allow_abbrev=False)
+
+ parser.add_argument(
+ "--build-type",
+ type=str,
+ required=True,
+ choices=[WEEKLY, RELEASE, PR, PUSH_TO_MAIN],
+ help="The type of build for which the matrix generation is required",
+ )
+
+ parser.add_argument(
+ "--output-json", type=str, required=True, help="Where to output the matrix as json data"
+ )
+
+ cli_args = parser.parse_args()
+
+ main(cli_args)
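The matrix expansion above is a plain Cartesian product of OSes and Python versions; here is a standalone sketch using the weekly configuration from the script:

```python
# Sketch of the matrix expansion performed by generate_test_matrix.py:
# each (OS, python version) pair becomes one runner entry.
import itertools

weekly_oses = {"linux": "ubuntu-22.04", "macos": "macos-11"}
weekly_pythons = ["3.7", "3.8", "3.9", "3.10"]

matrix = [
    {"os_kind": os_kind, "runs_on": runs_on, "python_version": py}
    for (os_kind, runs_on), py in itertools.product(weekly_oses.items(), weekly_pythons)
]

print(len(matrix))  # 2 OSes x 4 python versions = 8 runner entries
```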
diff --git a/frontends/concrete-python/script/actions_utils/generate_versions_html.py b/frontends/concrete-python/script/actions_utils/generate_versions_html.py
new file mode 100644
index 000000000..fa50c3433
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/generate_versions_html.py
@@ -0,0 +1,163 @@
+"""Tool to manage the versions.html file at the root of our docs sites."""
+
+import argparse
+from pathlib import Path
+
+from bs4 import BeautifulSoup
+from bs4.element import Tag
+from semver import VersionInfo
+
+VERSIONS_LIST_ID = "versions-list"
+
+
+def strip_leading_v(version_str: str):
+ """Strip leading v of a version which is not SemVer compatible."""
+ return version_str[1:] if version_str.startswith("v") else version_str
+
+
+def create_list_element(soup: BeautifulSoup, contents: Tag) -> Tag:
+ """Create a list element for links.
+
+ Args:
+ soup (BeautifulSoup): The soup to use to create the tag.
+ contents (Tag): The tag to wrap in the new list element.
+
+ Returns:
+ Tag: <li> tag containing the given contents.
+ """
+ new_list_element = soup.new_tag("li", **{"class": "toctree-l1"})
+ new_list_element.contents.append(contents)
+ return new_list_element
+
+
+def create_link_tag_set_string(soup: BeautifulSoup, version_string: str) -> Tag:
+ """Create a link tag on the given soup to version specified by version_string.
+
+ Args:
+ soup (BeautifulSoup): The soup to use to create the tag.
+ version_string (str): The version string to use.
+
+ Returns:
+ Tag: <a> tag with href "{version_string}/" and text {version_string}.
+ """
+ new_tag = soup.new_tag(
+ "a",
+ **{
+ "href": f"{version_string}/",
+ "class": "reference internal",
+ },
+ )
+
+ new_tag.string = version_string
+ return new_tag
+
+
+def main(args):
+ """Entry point."""
+
+ invalid_versions = [
+ version
+ for version in args.add_versions
+ if not VersionInfo.isvalid(strip_leading_v(version))
+ ]
+ if len(invalid_versions) > 0:
+ message = f"Found invalid versions: {invalid_versions}"
+ raise RuntimeError(message)
+
+ version_html = None
+ version_html_file_path = Path(args.versions_html_file).resolve()
+ with open(version_html_file_path, "r", encoding="utf-8") as f:
+ version_html = BeautifulSoup(f, "html.parser")
+
+ if version_html is None:
+ message = f"An error occurred while trying to load {str(version_html_file_path)}"
+ raise RuntimeError(message)
+
+ print(version_html)
+
+ version_list = version_html.find(id=VERSIONS_LIST_ID)
+ if version_list is None or version_list.name != "ul":
+ message = f"Could not find <ul> tag with id {VERSIONS_LIST_ID}"
+ raise RuntimeError(message)
+
+ non_semver_versions = {}
+ semver_versions = {}
+ for list_entry in version_list.find_all("li"):
+ version_tags = []
+ version_is_valid_semver = False
+ for potential_version_tag in list_entry.contents:
+ if not isinstance(potential_version_tag, Tag):
+ continue
+ version_is_valid_semver = VersionInfo.isvalid(
+ strip_leading_v(potential_version_tag.string)
+ )
+ version_tags.append(potential_version_tag.string)
+
+ num_version_tags = len(version_tags)
+ assert num_version_tags == 1, f"Can only have 1 version tag, got {num_version_tags}"
+
+ version_tag = version_tags[0]
+
+ if version_is_valid_semver:
+ semver_versions[version_tag.string] = list_entry
+ else:
+ non_semver_versions[version_tag.string] = list_entry
+
+ parsed_versions = [VersionInfo.parse(version) for version in args.add_versions]
+
+ versions_already_in_html = set(parsed_versions).intersection(semver_versions.keys())
+ if len(versions_already_in_html) > 0:
+ message = (
+ "The following versions are already in the html: "
+ f"{', '.join(str(ver) for ver in sorted(versions_already_in_html))}"
+ )
+ raise RuntimeError(message)
+
+ semver_versions.update(
+ (
+ parsed_version,
+ create_list_element(
+ version_html, create_link_tag_set_string(version_html, str(parsed_version))
+ ),
+ )
+ for parsed_version in parsed_versions
+ )
+
+ version_list.contents = []
+ for sorted_non_semver_version in sorted(non_semver_versions.keys()):
+ version_list.contents.append(non_semver_versions[sorted_non_semver_version])
+
+ # We want the most recent versions at the top
+ for sorted_semver_version in sorted(semver_versions.keys(), reverse=True):
+ version_list.contents.append(semver_versions[sorted_semver_version])
+
+ pretty_output = version_html.prettify()
+ print(pretty_output)
+
+ output_html_path = Path(args.output_html).resolve()
+ with open(output_html_path, "w", encoding="utf-8") as f:
+ f.write(pretty_output)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("versions.html generator", allow_abbrev=False)
+
+ parser.add_argument(
+ "--add-versions",
+ type=str,
+ required=True,
+ nargs="+",
+ help="A list of versions to add to versions.html. "
+ "The links will be sorted by versions with stable/main as the first entry. "
+ "The link will point to '$VERSION/' and will have text '$VERSION'.",
+ )
+ parser.add_argument(
+ "--versions-html-file",
+ type=str,
+ required=True,
+ help="Path to the versions.html to update. "
+ 'It must have a <ul> tag with id="versions-list".',
+ )
+ parser.add_argument("--output-html", type=str, required=True, help="Output file path.")
+
+ cli_args = parser.parse_args()
+ main(cli_args)
diff --git a/frontends/concrete-python/script/actions_utils/generate_versions_json.py b/frontends/concrete-python/script/actions_utils/generate_versions_json.py
new file mode 100644
index 000000000..746b769ad
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/generate_versions_json.py
@@ -0,0 +1,81 @@
+"""Tool to manage the versions.json file at the root of our docs sites."""
+
+import argparse
+import json
+from json.decoder import JSONDecodeError
+from pathlib import Path
+
+from semver import VersionInfo
+
+
+def strip_leading_v(version_str: str):
+ """Strip leading v of a version which is not SemVer compatible."""
+ return version_str[1:] if version_str.startswith("v") else version_str
+
+
+def main(args):
+ """Entry point."""
+ version = args.version
+ latest = args.latest
+ prerelease = args.prerelease
+
+ if not VersionInfo.isvalid(strip_leading_v(version)):
+ message = f"Invalid version: {version}"
+ raise RuntimeError(message)
+
+ version_json_file_path = Path(args.versions_json_file).resolve()
+ try:
+ with open(version_json_file_path, "r", encoding="utf-8") as f:
+ version_json = json.loads(f.read())
+ except JSONDecodeError as err:
+ message = f"An error occurred while trying to load {str(version_json_file_path)}"
+ raise RuntimeError(message) from err
+
+ # Version json is composed by:
+ # all: list of all published versions
+ # menu: list of all available versions (if any entry is not included in "all",
+ # warning banner with DEV/PRE-RELEASE doc warning will be displayed)
+ # latest: latest version, if current doc != latest, warning banner is displayed
+ if version not in version_json["menu"]:
+ version_json["menu"].append(version)
+ if not prerelease:
+ version_json["all"].append(version)
+ if latest:
+ version_json["latest"] = version
+
+ print(version_json)
+ output_json_path = Path(args.output_json).resolve()
+ with open(output_json_path, "w", encoding="utf-8") as f:
+ json.dump(version_json, f, indent=4)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("versions.json generator", allow_abbrev=False)
+
+ parser.add_argument(
+ "--add-version",
+ type=str,
+ required=True,
+ dest="version",
+ help="A single version to add to versions.json. "
+ "The link will point to '$VERSION/' and will have text '$VERSION'.",
+ )
+ parser.add_argument(
+ "--versions-json-file", type=str, required=True, help="Path to the versions.json to update."
+ )
+ parser.add_argument(
+ "--prerelease",
+ action="store_true",
+ dest="prerelease",
+ help="set this version as a pre-release documentation.",
+ )
+ parser.add_argument(
+ "--latest",
+ action="store_true",
+ dest="latest",
+ help="set this version as latest available documentation.",
+ )
+ parser.add_argument("--output-json", type=str, required=True, help="Output file path.")
+
+ cli_args = parser.parse_args()
+ main(cli_args)
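The update logic documented in the comment above can be sketched on a hypothetical `versions.json` document (the version numbers are made up):

```python
# Sketch of the versions.json update: the version is always added to the
# menu, added to "all" unless it is a pre-release, and recorded as latest
# when requested. Input document and versions are hypothetical.
import json

version_json = {"all": ["1.0.0"], "menu": ["1.0.0"], "latest": "1.0.0"}
version, latest, prerelease = "1.1.0", True, False

if version not in version_json["menu"]:
    version_json["menu"].append(version)
if not prerelease:
    version_json["all"].append(version)
if latest:
    version_json["latest"] = version

print(json.dumps(version_json, indent=4))
```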
diff --git a/frontends/concrete-python/script/actions_utils/get_latest_run.sh b/frontends/concrete-python/script/actions_utils/get_latest_run.sh
new file mode 100755
index 000000000..add01cd14
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/get_latest_run.sh
@@ -0,0 +1,61 @@
+#!/bin/bash
+
+TOKEN=
+ORG_REPO=
+EVENTS_TO_CHECK=
+
+while [ -n "$1" ]
+do
+ case "$1" in
+ "--token" )
+ shift
+ TOKEN="$1"
+ ;;
+
+ "--org-repo" )
+ shift
+ ORG_REPO="$1"
+ ;;
+
+ "--event-types" )
+ shift
+ EVENTS_TO_CHECK="$1"
+ ;;
+
+ *)
+ echo "Unknown param : $1"
+ exit 1
+ ;;
+ esac
+ shift
+done
+
+# Store the workflow runs returned as JSON, one file per event type
+declare -a JSON_FILES_ARRAY=()
+for EVENT in $EVENTS_TO_CHECK; do
+ CURR_FILE="$(mktemp --suffix=.json)"
+ curl \
+ -X GET \
+ -H "Accept: application/vnd.github.v3+json" \
+ -H "Authorization: token ${TOKEN}" \
+ "https://api.github.com/repos/${ORG_REPO}/actions/runs?branch=main&event=${EVENT}&status=success" | \
+ jq -rc '.workflow_runs | sort_by(.updated_at)[-1]' > "${CURR_FILE}"
+ JSON_FILES_ARRAY+=("${CURR_FILE}")
+done
+
+# Put all the workflows in the same json and dump that
+CONCAT_FILE="$(mktemp --suffix=.json)"
+jq -sr '.' "${JSON_FILES_ARRAY[@]}" > "${CONCAT_FILE}"
+
+# Sort by updated_at, get the last and get the sha1 for this last one
+BEFORE_SHA=$(jq -rc 'sort_by(.updated_at)[-1].head_sha' "${CONCAT_FILE}")
+
+# Remove files
+rm "${CONCAT_FILE}"
+
+for FILE_TO_RM in "${JSON_FILES_ARRAY[@]}"; do
+ rm "${FILE_TO_RM}"
+done
+
+# Echo for the outside world
+echo "${BEFORE_SHA}"
diff --git a/frontends/concrete-python/script/actions_utils/parse_pip_audit_vulns.py b/frontends/concrete-python/script/actions_utils/parse_pip_audit_vulns.py
new file mode 100644
index 000000000..382d9f8c0
--- /dev/null
+++ b/frontends/concrete-python/script/actions_utils/parse_pip_audit_vulns.py
@@ -0,0 +1,65 @@
+"""Script to parse output of pip-audit"""
+
+import argparse
+import json
+import sys
+from pathlib import Path
+from typing import List
+
+
+def format_vulnerability(pkg_name, pkg_version, vuln_info: dict) -> List[str]:
+ """Format a vulnerability info."""
+
+ vuln_strs = [
+ f"{pkg_name}({pkg_version}) - ID: {vuln['id']} "
+ f"fixed in {', '.join(vuln['fix_versions'])}"
+ for vuln in vuln_info
+ ]
+ return vuln_strs
+
+
+# Cannot have a backslash in f-string, so create a constant for newline
+NEW_LINE = "\n"
+
+
+def main(args):
+ """Entry point"""
+
+ vulns_json_path = Path(args.vulns_json).resolve()
+ json_content = []
+ with open(vulns_json_path, "r", encoding="utf-8") as f:
+ json_content.extend(f.readlines())
+
+ report_path = Path(args.vulns_report).resolve()
+ with open(report_path, "w", encoding="utf-8") as report:
+ if json_content:
+ report.write("Found the following vulnerabilities:\n")
+ assert len(json_content) == 1
+ json_data = json.loads(json_content[0])
+ for entry in json_data:
+ vuln_entries = entry.get("vulns", [])
+ if vuln_entries:
+ formatted_vulns = format_vulnerability(
+ entry["name"], entry["version"], vuln_entries
+ )
+ report.write(f"- {f'{NEW_LINE}- '.join(formatted_vulns)}\n")
+ sys.exit(1)
+ else:
+ report.write("No vulnerabilities found.\n")
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("pip-audit output parser", allow_abbrev=False)
+
+ parser.add_argument(
+ "--vulns-json", type=str, required=True, help="The path to the pip-audit json output"
+ )
+ parser.add_argument(
+ "--vulns-report",
+ type=str,
+ required=True,
+ help="Path to the file to which to write the vulnerability report",
+ )
+
+ cli_args = parser.parse_args()
+ main(cli_args)
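The per-vulnerability formatting can be exercised with a hypothetical pip-audit entry (the package name and advisory ID below are made up):

```python
# Sketch of the vulnerability formatting in parse_pip_audit_vulns.py:
# one line per advisory, listing the affected package, the advisory ID
# and the versions that fix it.
from typing import List


def format_vulnerability(pkg_name, pkg_version, vuln_info: dict) -> List[str]:
    """Format a vulnerability info."""
    return [
        f"{pkg_name}({pkg_version}) - ID: {vuln['id']} "
        f"fixed in {', '.join(vuln['fix_versions'])}"
        for vuln in vuln_info
    ]


lines = format_vulnerability(
    "somepkg", "1.2.3", [{"id": "PYSEC-0000-00000", "fix_versions": ["1.2.4", "2.0.0"]}]
)
print(lines[0])
```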
diff --git a/frontends/concrete-python/script/doc_utils/gen_supported_ufuncs.py b/frontends/concrete-python/script/doc_utils/gen_supported_ufuncs.py
new file mode 100644
index 000000000..fa2b0b820
--- /dev/null
+++ b/frontends/concrete-python/script/doc_utils/gen_supported_ufuncs.py
@@ -0,0 +1,75 @@
+"""Update list of supported functions in the doc."""
+
+import argparse
+
+from concrete.numpy.tracing import Tracer
+
+
+def main(file_to_update):
+ """Update list of supported functions in file_to_update"""
+ f_names = sorted(f.__name__.replace("_", "\\_") for f in Tracer.SUPPORTED_NUMPY_OPERATORS)
+ supported_func = [
+ f"[np.{f}](https://numpy.org/doc/stable/reference/generated/numpy.{f}.html)"
+ for f in f_names
+ ]
+
+ with open(file_to_update, "r", encoding="utf-8") as file:
+ lines = file.readlines()
+
+ newlines = []
+ keep_line = True
+
+ for line in lines:
+ if line.startswith(
+ "<!--- gen_supported_ufuncs.py: inject supported operations [BEGIN] -->"
+ ):
+ keep_line = False
+ newlines.append(line)
+ newlines.append(
+ "<!--- do not edit, auto generated content by gen_supported_ufuncs.py -->\n"
+ )
+ elif line.startswith(
+ "<!--- do not edit, auto generated content by gen_supported_ufuncs.py -->"
+ ):
+ pass
+ elif line.startswith(
+ "<!--- gen_supported_ufuncs.py: inject supported operations [END] -->"
+ ):
+ keep_line = True
+
+ # Inject the supported functions
+ newlines.extend(f"* {f}\n" for f in supported_func)
+
+ newlines.append(line)
+ else:
+ assert "gen_supported_ufuncs.py" not in line, (
+ f"Error: not expected to have 'gen_supported_ufuncs.py' at line {line} "
+ f"of {file_to_update}"
+ )
+
+ if keep_line:
+ newlines.append(line)
+
+ if args.check:
+
+ with open(file_to_update, "r", encoding="utf-8") as file:
+ oldlines = file.readlines()
+
+ assert (
+ oldlines == newlines
+ ), "List of supported functions is not up to date. Please run `make supported_functions`."
+
+ else:
+ with open(file_to_update, "w", encoding="utf-8") as file:
+ file.writelines(newlines)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="Update list of supported functions in the doc")
+ parser.add_argument("--check", action="store_true", help="flag to enable just checking mode")
+
+ parser.add_argument("file_to_update", type=str, help=".md file to update")
+ args = parser.parse_args()
+ main(args.file_to_update)
diff --git a/frontends/concrete-python/script/make_utils/changelog_helper.py b/frontends/concrete-python/script/make_utils/changelog_helper.py
new file mode 100644
index 000000000..ac2285ae1
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/changelog_helper.py
@@ -0,0 +1,251 @@
+"""Tool to bypass the insane logic of semantic-release and generate changelogs we want"""
+
+import argparse
+import subprocess
+import sys
+from collections import deque
+
+from git.repo import Repo
+from semantic_release.changelog import markdown_changelog
+from semantic_release.errors import UnknownCommitMessageStyleError
+from semantic_release.settings import config, current_commit_parser
+from semantic_release.vcs_helpers import get_repository_owner_and_name
+from semver import VersionInfo
+
+
+def log_msg(*args, file=sys.stderr, **kwargs):
+ """Shortcut to print to sys.stderr."""
+ print(*args, file=file, **kwargs)
+
+
+def strip_leading_v(version_str: str):
+ """Strip leading v of a version which is not SemVer compatible."""
+ return version_str[1:] if version_str.startswith("v") else version_str
+
+
+def get_poetry_project_version() -> VersionInfo:
+ """Run poetry version and get the project version"""
+ command = ["poetry", "version"]
+ poetry_version_output = subprocess.check_output(command, text=True)
+ return version_string_to_version_info(poetry_version_output.split(" ")[1])
+
+
+def raise_exception_or_print_warning(is_error: bool, message_body: str):
+ """Raise an exception if is_error is true else print a warning to stderr"""
+ msg_start = "Error" if is_error else "Warning"
+ msg = f"{msg_start}: {message_body}"
+ if is_error:
+ raise RuntimeError(msg)
+ log_msg(msg)
+
+
+def version_string_to_version_info(version_string: str) -> VersionInfo:
+ """Convert git tag to VersionInfo."""
+ return VersionInfo.parse(strip_leading_v(version_string))
+
+
+def generate_changelog(repo: Repo, from_commit_excluded: str, to_commit_included: str) -> dict:
+ """Recreate the functionality from semantic release with the from and to commits.
+
+ Args:
+ repo (Repo): the gitpython Repo object representing your git repository
+ from_commit_excluded (str): the commit after which we want to collect commit messages for
+ the changelog
+ to_commit_included (str): the last commit included in the collected commit messages for the
+ changelog.
+
+ Returns:
+ dict: the same formatted dict as the generate_changelog from semantic-release
+ """
+ # Additional sections will be added as new types are encountered
+ changes: dict = {"breaking": []}
+
+ rev = f"{from_commit_excluded}...{to_commit_included}"
+
+ for commit in repo.iter_commits(rev):
+ hash_ = commit.hexsha
+ commit_message = (
+ commit.message.replace("\r\n", "\n")
+ if isinstance(commit.message, str)
+ else commit.message.replace(b"\r\n", b"\n")
+ )
+ try:
+ message = current_commit_parser()(commit_message)
+ if message.type not in changes:
+ log_msg(f"Creating new changelog section for {message.type} ")
+ changes[message.type] = []
+
+ # Capitalize the first letter of the message, leaving others as they were
+ # (using str.capitalize() would make the other letters lowercase)
+ formatted_message = message.descriptions[0][0].upper() + message.descriptions[0][1:]
+ if config.get("changelog_capitalize") is False:
+ formatted_message = message.descriptions[0]
+
+ # By default, feat(x): description shows up in changelog with the
+ # scope bolded, like:
+ #
+ # * **x**: description
+ if config.get("changelog_scope") and message.scope:
+ formatted_message = f"**{message.scope}:** {formatted_message}"
+
+ changes[message.type].append((hash_, formatted_message))
+
+ if message.breaking_descriptions:
+ # Copy breaking change descriptions into changelog
+ for paragraph in message.breaking_descriptions:
+ changes["breaking"].append((hash_, paragraph))
+ elif message.bump == 3:
+ # Major, but no breaking descriptions, use commit subject instead
+ changes["breaking"].append((hash_, message.descriptions[0]))
+
+ except UnknownCommitMessageStyleError as err:
+ log_msg(f"Ignoring UnknownCommitMessageStyleError: {err}")
+
+ return changes
+
+
+def main(args):
+ """Entry point"""
+
+ repo = Repo(args.repo_root)
+
+ sha1_to_tags = {tag.commit.hexsha: tag for tag in repo.tags}
+
+ to_commit = repo.commit(args.to_ref)
+ log_msg(f"To commit: {to_commit}")
+
+ to_tag = sha1_to_tags.get(to_commit.hexsha, None)
+ if to_tag is None:
+ raise_exception_or_print_warning(
+ is_error=args.to_ref_must_have_tag,
+ message_body=f"to-ref {args.to_ref} has no tag associated to it",
+ )
+
+ to_version = (
+ get_poetry_project_version()
+ if to_tag is None
+ else version_string_to_version_info(to_tag.name)
+ )
+ log_msg(f"Project version {to_version} taken from tag: {to_tag is not None}")
+
+ from_commit = None
+ if args.from_ref is None:
+ tags_by_name = {strip_leading_v(tag.name): tag for tag in repo.tags}
+ version_infos = {
+ VersionInfo.parse(tag_name): tag_name
+ for tag_name in tags_by_name
+ if VersionInfo.isvalid(tag_name)
+ }
+ all_release_version_infos = {
+ version_info: tags_by_name[tag_name]
+ for version_info, tag_name in version_infos.items()
+ if version_info.prerelease is None
+ }
+ log_msg(f"All release versions {all_release_version_infos}")
+
+ versions_before_project_version = [
+ version_info for version_info in all_release_version_infos if version_info < to_version
+ ]
+ if len(versions_before_project_version) > 0:
+ highest_version_before_current_version = max(versions_before_project_version)
+ highest_version_tag = all_release_version_infos[highest_version_before_current_version]
+ from_commit = highest_version_tag.commit
+ else:
+ # No versions before, get the initial commit reachable from to_commit
+ # from https://stackoverflow.com/a/48232574
+ last_element_extractor = deque(repo.iter_commits(to_commit), 1)
+ from_commit = last_element_extractor.pop()
+ else:
+ from_commit = repo.commit(args.from_ref)
+
+ log_msg(f"From commit: {from_commit}")
+ ancestor_commit = repo.merge_base(to_commit, from_commit)
+ assert len(ancestor_commit) == 1
+ ancestor_commit = ancestor_commit[0]
+ log_msg(f"Common ancestor: {ancestor_commit}")
+
+ if ancestor_commit != from_commit:
+ do_not_change_from_ref = args.do_not_change_from_ref and args.from_ref is not None
+ raise_exception_or_print_warning(
+ is_error=do_not_change_from_ref,
+ message_body=(
+ f"the ancestor {ancestor_commit} for {from_commit} and {to_commit} "
+ f"is not the same commit as the commit for '--from-ref' {from_commit}."
+ ),
+ )
+
+ ancestor_tag = sha1_to_tags.get(ancestor_commit.hexsha, None)
+ if ancestor_tag is None:
+ raise_exception_or_print_warning(
+ is_error=args.ancestor_must_have_tag,
+ message_body=(
            f"the ancestor {ancestor_commit} for {from_commit} and {to_commit} has no tag"
+ ),
+ )
+
+ ancestor_version_str = (
+ None if ancestor_tag is None else str(version_string_to_version_info(ancestor_tag.name))
+ )
+
+ log_msg(
+ f"Collecting commits from \n{ancestor_commit} "
+ f"(tag: {ancestor_tag} - parsed version "
+ f"{str(ancestor_version_str)}) to \n{to_commit} "
+ f"(tag: {to_tag} - parsed version {str(to_version)})"
+ )
+
+ log_dict = generate_changelog(repo, ancestor_commit.hexsha, to_commit.hexsha)
+
+ owner, name = get_repository_owner_and_name()
+ md_changelog = markdown_changelog(
+ owner,
+ name,
+ str(to_version),
+ log_dict,
+ header=True,
+ previous_version=ancestor_version_str,
+ )
+
+ print(md_changelog)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("Changelog helper", allow_abbrev=False)
+
+ parser.add_argument("--repo-root", type=str, default=".", help="Path to the repo root")
+ parser.add_argument(
+ "--to-ref",
+ type=str,
+ help="Specify the git ref-like string (sha1, tag, HEAD~, etc.) that will mark the LAST "
+ "included commit of the changelog. If this is not specified, the current project version "
+ "will be used to create a changelog with the current commit as last commit.",
+ )
+ parser.add_argument(
+ "--from-ref",
+ type=str,
+ help="Specify the git ref-like string (sha1, tag, HEAD~, etc.) that will mark the commit "
+ "BEFORE the first included commit of the changelog. If this is not specified, the most "
+ "recent actual release tag (no pre-releases) before the '--to-ref' argument will be used. "
        "If the tagged commit is not an ancestor of '--to-ref' then the most recent common ancestor "
+ "(git merge-base) will be used unless '--do-not-change-from-ref' is specified.",
+ )
+ parser.add_argument(
+ "--ancestor-must-have-tag",
+ action="store_true",
+ help="Set if the used ancestor must have a tag associated to it.",
+ )
+ parser.add_argument(
+ "--to-ref-must-have-tag",
+ action="store_true",
+ help="Set if '--to-ref' must have a tag associated to it.",
+ )
+ parser.add_argument(
+ "--do-not-change-from-ref",
+ action="store_true",
+ help="Specify to prevent selecting a different '--from-ref' than the one specified in cli. "
+ "Will raise an exception if '--from-ref' is not a suitable ancestor for '--to-ref' and "
+ "would otherwise use the most recent common ancestor (git merge-base) as '--from-ref'.",
+ )
+
+ cli_args = parser.parse_args()
+ main(cli_args)
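The fallback in `main` above (pick the highest non-pre-release tag strictly below the current version) can be sanity-checked with a dependency-free sketch; `parse_version` and `previous_release` are hypothetical helpers standing in for `semver.VersionInfo` and the tag selection, and only handle plain `MAJOR.MINOR.PATCH[-PRERELEASE]` tags:

```python
# Hypothetical stand-ins for semver.VersionInfo and the tag selection in main()
def parse_version(tag):
    tag = tag[1:] if tag.startswith("v") else tag  # strip leading "v"
    core, _, prerelease = tag.partition("-")
    major, minor, patch = (int(part) for part in core.split("."))
    return (major, minor, patch), prerelease or None


def previous_release(current_tag, tags):
    """Return the highest non-pre-release tag strictly below current_tag, or None."""
    current_key, _ = parse_version(current_tag)
    candidates = []
    for tag in tags:
        try:
            key, prerelease = parse_version(tag)
        except ValueError:
            continue  # ignore tags that are not SemVer-like
        if prerelease is None and key < current_key:
            candidates.append((key, tag))
    return max(candidates)[1] if candidates else None


print(previous_release("v1.1.0", ["v0.9.0", "v1.0.0", "v1.1.0-rc1", "v1.0.1"]))  # v1.0.1
```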
diff --git a/frontends/concrete-python/script/make_utils/get_pylintrc_notes.py b/frontends/concrete-python/script/make_utils/get_pylintrc_notes.py
new file mode 100644
index 000000000..590f22901
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/get_pylintrc_notes.py
@@ -0,0 +1,30 @@
+"""File to get pylintrc notes"""
+
+import argparse
+import configparser
+from pathlib import Path
+
+
+def main(args):
+ """Entry point"""
+
+ pylintrc_file_path = Path(args.pylintrc_path).resolve()
+ config = configparser.ConfigParser()
+ config.read(pylintrc_file_path)
+ notes = sorted(x.strip() for x in config["MISCELLANEOUS"]["notes"].split(","))
+    # Always include the TODO marker; build it by concatenation so this script does not match itself
+ notes.append("TO" + "DO")
+ notes_for_grep_search = r"\|".join(notes)
+ print(notes_for_grep_search)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser("Parse pylintrc notes", allow_abbrev=False)
+
+ parser.add_argument(
+ "--pylintrc-path", type=str, required=True, help="Path to pylintrc ini config"
+ )
+
+ cli_args = parser.parse_args()
+
+ main(cli_args)
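The pattern construction above can be exercised against an in-memory pylintrc fragment (the `FIXME`/`XXX` note list here is just example data):

```python
import configparser

# Example pylintrc fragment; FIXME/XXX are sample note markers
config = configparser.ConfigParser()
config.read_string("[MISCELLANEOUS]\nnotes=FIXME, XXX\n")
notes = sorted(x.strip() for x in config["MISCELLANEOUS"]["notes"].split(","))
notes.append("TO" + "DO")  # same trick as the script: avoid matching itself
print(r"\|".join(notes))  # FIXME\|XXX\|TODO
```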
diff --git a/frontends/concrete-python/script/make_utils/is_latest.py b/frontends/concrete-python/script/make_utils/is_latest.py
new file mode 100644
index 000000000..2f3d79838
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/is_latest.py
@@ -0,0 +1,44 @@
+"""
+Simple script to check if a given version is the latest version of Concrete Numpy.
+"""
+
+import sys
+from typing import List
+
+import requests # type: ignore
+from semver import VersionInfo
+
+
+def is_latest(new_version: VersionInfo, existing_versions: List[VersionInfo]):
+ """
+ Get if `new_version` is the latest version among `existing_versions`.
+ """
+
+ if new_version.prerelease:
+ return False
+
+ for existing_version in existing_versions:
+ if existing_version.prerelease:
+ continue
+
+ if existing_version > new_version:
+ return False
+
+ return True
+
+
+def main():
+ """
+ Run the script.
+ """
+
+    releases_url = "https://api.github.com/repos/zama-ai/concrete-numpy/releases"
+    info = requests.get(releases_url, timeout=60).json()
+
+    new_version = VersionInfo.parse(sys.argv[1])
+    existing_versions = [VersionInfo.parse(release["name"][1:]) for release in info]
+
+ print(is_latest(new_version, existing_versions))
+
+
+if __name__ == "__main__":
+ main()
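The `is_latest` predicate above can be mirrored without external dependencies; `parse` here is a hypothetical stand-in for `semver.VersionInfo.parse` that ignores build metadata and numeric pre-release ordering:

```python
# Dependency-free sketch of the is_latest logic above
def parse(version):
    core, _, prerelease = version.partition("-")
    return tuple(int(part) for part in core.split(".")), prerelease or None


def is_latest(new, existing):
    new_key, new_prerelease = parse(new)
    if new_prerelease:
        return False  # pre-releases are never "latest"
    return all(
        key <= new_key
        for key, prerelease in map(parse, existing)
        if prerelease is None  # existing pre-releases do not count
    )


print(is_latest("1.2.0", ["1.0.0", "1.1.0", "2.0.0-rc1"]))  # True
print(is_latest("1.2.0", ["1.3.0"]))  # False
```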
diff --git a/frontends/concrete-python/script/make_utils/is_prerelease.py b/frontends/concrete-python/script/make_utils/is_prerelease.py
new file mode 100644
index 000000000..e43ae01f5
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/is_prerelease.py
@@ -0,0 +1,28 @@
+"""
+Simple script to check if a given version is a pre-release version.
+"""
+
+import sys
+
+from semver import VersionInfo
+
+
+def is_prerelease(version: VersionInfo):
+ """
+ Get if `version` is a pre-release version.
+ """
+
+ return version.prerelease is not None
+
+
+def main():
+ """
+ Run the script.
+ """
+
+ version = VersionInfo.parse(sys.argv[1])
+ print(str(is_prerelease(version)).lower())
+
+
+if __name__ == "__main__":
+ main()
diff --git a/frontends/concrete-python/script/make_utils/licenses.sh b/frontends/concrete-python/script/make_utils/licenses.sh
new file mode 100755
index 000000000..abf9ef84a
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/licenses.sh
@@ -0,0 +1,126 @@
+#!/bin/bash
+
+set -e
+
+BASENAME="licenses"
+LICENSE_DIRECTORY="docs"
+CHECK=0
+DIFF_TOOL="diff --ignore-all-space --ignore-tab-expansion --ignore-space-change --ignore-blank-lines --strip-trailing-cr"
+TMP_VENV_PATH="/tmp/tmp_venv"
+DO_USER_LICENSES=1
+
+# Dev licenses are not generated for now, but could be re-enabled
+DO_DEV_LICENSES=0
+
+OUTPUT_DIRECTORY="${LICENSE_DIRECTORY}"
+
+while [ -n "$1" ]
+do
+ case "$1" in
+ "--check" )
+ CHECK=1
+ OUTPUT_DIRECTORY=$(mktemp -d)
+ ;;
+
+ *)
            echo "Unknown param: $1"
+ exit 1
+ ;;
+ esac
+ shift
+done
+
+UNAME=$(uname)
+if [ "$UNAME" == "Darwin" ]
+then
+ OS=mac
+elif [ "$UNAME" == "Linux" ]
+then
+ OS=linux
+else
+ echo "Problem with OS"
+ exit 255
+fi
+
+if [ $DO_USER_LICENSES -eq 1 ]
+then
+    # Licenses for user (install in a temporary venv)
+ echo "Doing licenses for user"
+
+ FILENAME="${OS}.dependency.${BASENAME}.txt"
+ LICENSES_FILENAME="${LICENSE_DIRECTORY}/${FILENAME}"
+ NEW_LICENSES_FILENAME="${OUTPUT_DIRECTORY}/${FILENAME}"
+
+ rm -rf $TMP_VENV_PATH/tmp_venv
+ python3 -m venv $TMP_VENV_PATH/tmp_venv
+
+ # SC1090: Can't follow non-constant source. Use a directive to specify location.
+ # shellcheck disable=SC1090,SC1091
+ source $TMP_VENV_PATH/tmp_venv/bin/activate
+
+ python -m pip install -U pip wheel
+ python -m pip install -U --force-reinstall setuptools
+ poetry install --only main
+ python -m pip install pip-licenses
+ pip-licenses | grep -v "pkg\-resources\|concrete-numpy" | tee "${NEW_LICENSES_FILENAME}"
+
+    # Remove trailing whitespace ([[:blank:]] matches spaces and tabs)
+    if [ "$UNAME" == "Darwin" ]
+    then
+        sed -i "" 's/[[:blank:]]*$//' "${NEW_LICENSES_FILENAME}"
+    else
+        sed -i 's/[[:blank:]]*$//' "${NEW_LICENSES_FILENAME}"
+    fi
+
+ deactivate
+
+ if [ $CHECK -eq 1 ]
+ then
+ echo "$DIFF_TOOL $LICENSES_FILENAME ${NEW_LICENSES_FILENAME}"
+ $DIFF_TOOL "$LICENSES_FILENAME" "${NEW_LICENSES_FILENAME}"
+ echo "Success: no update in $LICENSES_FILENAME"
+ fi
+fi
+
+if [ $DO_DEV_LICENSES -eq 1 ]
+then
+ # Licenses for developer (install in a temporary venv)
+    echo "Doing licenses for developer"
+
+ FILENAME="${BASENAME}_${OS}_dev.txt"
+ LICENSES_FILENAME="${LICENSE_DIRECTORY}/${FILENAME}"
+ NEW_LICENSES_FILENAME="${OUTPUT_DIRECTORY}/${FILENAME}"
+
+ rm -rf $TMP_VENV_PATH/tmp_venv
+ python3 -m venv $TMP_VENV_PATH/tmp_venv
+
+ # SC1090: Can't follow non-constant source. Use a directive to specify location.
+ # shellcheck disable=SC1090,SC1091
+ source $TMP_VENV_PATH/tmp_venv/bin/activate
+
+ make setup_env
+ pip-licenses | grep -v "pkg\-resources\|concrete-numpy" | tee "${NEW_LICENSES_FILENAME}"
+
+    # Remove trailing whitespace ([[:blank:]] matches spaces and tabs)
+    if [ "$UNAME" == "Darwin" ]
+    then
+        sed -i "" 's/[[:blank:]]*$//' "${NEW_LICENSES_FILENAME}"
+    else
+        sed -i 's/[[:blank:]]*$//' "${NEW_LICENSES_FILENAME}"
+    fi
+
+ deactivate
+
+ if [ $CHECK -eq 1 ]
+ then
+
+ echo "$DIFF_TOOL $LICENSES_FILENAME ${NEW_LICENSES_FILENAME}"
+ $DIFF_TOOL "$LICENSES_FILENAME" "${NEW_LICENSES_FILENAME}"
+ echo "Success: no update in $LICENSES_FILENAME"
+ fi
+fi
+
+rm -f ${LICENSE_DIRECTORY}/licenses_*.txt.tmp
+rm -rf $TMP_VENV_PATH/tmp_venv
+
+echo "End of license script"
diff --git a/frontends/concrete-python/script/make_utils/ncpus.sh b/frontends/concrete-python/script/make_utils/ncpus.sh
new file mode 100755
index 000000000..61590effd
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/ncpus.sh
@@ -0,0 +1,7 @@
+#!/bin/bash
+
+if [[ $(uname) == "Darwin" ]]; then
+ sysctl -n hw.logicalcpu
+else
+ nproc
+fi
diff --git a/frontends/concrete-python/script/make_utils/setup_os_deps.sh b/frontends/concrete-python/script/make_utils/setup_os_deps.sh
new file mode 100755
index 000000000..83aeaf41e
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/setup_os_deps.sh
@@ -0,0 +1,95 @@
+#!/usr/bin/env bash
+
+# From https://stackoverflow.com/a/69860299
+isDocker(){
+ local cgroup=/proc/1/cgroup
+ test -f $cgroup && [[ "$(<$cgroup)" = *:cpuset:/docker/* ]]
+}
+
+isDockerBuildkit(){
+ local cgroup=/proc/1/cgroup
+ test -f $cgroup && [[ "$(<$cgroup)" = *:cpuset:/docker/buildkit/* ]]
+}
+
+isDockerContainer(){
+ [[ -e /.dockerenv ]]
+}
+
+LINUX_INSTALL_PYTHON=0
+
+while [ -n "$1" ]
+do
+ case "$1" in
+ "--linux-install-python" )
+ LINUX_INSTALL_PYTHON=1
+ ;;
+
+ *)
            echo "Unknown param: $1"
+ exit 1
+ ;;
+ esac
+ shift
+done
+
+OS_NAME=$(uname)
+
+if [[ "${OS_NAME}" == "Linux" ]]; then
+ # Docker build
+ if isDockerBuildkit || (isDocker && ! isDockerContainer); then
+ CLEAR_APT_LISTS="rm -rf /var/lib/apt/lists/* &&"
+ SUDO_BIN=""
+ else
+ CLEAR_APT_LISTS=""
+ SUDO_BIN="$(command -v sudo)"
+ if [[ "${SUDO_BIN}" != "" ]]; then
+ SUDO_BIN="${SUDO_BIN} "
+ fi
+ fi
+
+ PYTHON_PACKAGES=
+ if [[ "${LINUX_INSTALL_PYTHON}" == "1" ]]; then
+ PYTHON_PACKAGES="python3-pip \
+ python3 \
+ python3-dev \
+ python3-tk \
+ python3-venv \
+ python-is-python3 \
+ "
+ fi
+
+    SETUP_CMD="${SUDO_BIN:+$SUDO_BIN}apt-get update && ${SUDO_BIN:+$SUDO_BIN}apt-get upgrade --no-install-recommends -y && \
+ ${SUDO_BIN:+$SUDO_BIN}apt-get install --no-install-recommends -y \
+ build-essential \
+ curl \
+ sqlite3 \
+ ${PYTHON_PACKAGES:+$PYTHON_PACKAGES} \
+ git \
+ graphviz* \
+ jq \
+ make \
+ pandoc \
+ shellcheck && \
+ ${CLEAR_APT_LISTS:+$CLEAR_APT_LISTS} \
+ pip install --no-cache-dir --upgrade pip && \
+ pip install --no-cache-dir poetry"
+ eval "${SETUP_CMD}"
+elif [[ "${OS_NAME}" == "Darwin" ]]; then
+ # Some problems with the git which is preinstalled on AWS virtual machines. Let's unlink it
+ # but not fail if it is not there, so use 'cat' as a hack to be sure that, even if set -x is
+ # activated later in this script, the status is still 0 == success
+ brew unlink git@2.35.1 | cat
+ brew install git
+
+ brew install curl graphviz jq make pandoc shellcheck sqlite
+ python3 -m pip install -U pip
+ python3 -m pip install poetry
+
+ echo "Make is currently installed as gmake"
+ echo 'If you need to use it as "make", you can add a "gnubin" directory to your PATH from your bashrc like:'
+ # shellcheck disable=SC2016
+ echo 'PATH="/usr/local/opt/make/libexec/gnubin:$PATH"'
+else
+ echo "Unknown OS"
+ exit 1
+fi
diff --git a/frontends/concrete-python/script/make_utils/test_md_python_code.py b/frontends/concrete-python/script/make_utils/test_md_python_code.py
new file mode 100644
index 000000000..951ca34a3
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/test_md_python_code.py
@@ -0,0 +1,143 @@
+"""Helper script to be able to test python code in markdown files."""
+
+import argparse
+import re
+import sys
+import traceback
+from pathlib import Path
+from typing import Dict, List
+
+PYTHON_BLOCK_HINTS = ["py", "python", "python3"]
+BLOCK_STARTS = tuple(f"```{hint}" for hint in PYTHON_BLOCK_HINTS)
+BLOCK_END = "```"
+# A directive is an HTML comment on its own line, e.g. <!--skip--> or <!--cont-->
+DIRECTIVE_COMMENT_PATTERN = r"<!--\s*(\w+)\s*-->"
+SKIP_DIRECTIVE = "skip"
+CONT_DIRECTIVE = "cont"
+
+
+def get_code_blocks_for_file(md_file: Path) -> Dict[int, List[str]]:
+ """Function to process an md file and test the python code in it.
+
+ Args:
+ md_file (Path): The path to the md file to convert and test.
+
+ Raises:
+ SyntaxError: If EOF is reached before a code block is closed.
+ SyntaxError: If a block is not closed and a new python block is opened.
+
+ Returns:
+ Dict[int, List[str]]: A dict containing the code blocks of the file.
+ """
+ file_content = None
+
+ python_code_blocks: Dict[int, List[str]] = {}
+
+ def get_code_block_container(line_idx):
+ block_idx = line_idx
+ python_code_blocks[block_idx] = []
+ return python_code_blocks[block_idx]
+
+ with open(md_file, encoding="utf-8") as f:
+ file_content = f.readlines()
+
+ file_content_iterator = iter(enumerate(file_content, 1))
+ python_block_continues = False
+ skip_next_python_block = False
+
+ for line_idx, line in file_content_iterator:
+ if line.startswith(BLOCK_STARTS):
+ if skip_next_python_block:
+ skip_next_python_block = False
+ continue
+ if not python_block_continues:
+ current_python_code = get_code_block_container(line_idx)
+ while True:
+                    # lines from readlines() keep their "\n", so "" only occurs
+                    # as the EOF default (a bare next() would leak StopIteration)
+                    line_idx, line = next(file_content_iterator, (line_idx, ""))
+                    if line == "":
+ # Reached EOF
+ message = (
+ "Reached EOF before finding the end of the current python block in "
+ f"{str(md_file)}"
+ )
+ raise SyntaxError(message)
+
+ if line.strip() == BLOCK_END:
+ break
+
+ if line.startswith(BLOCK_STARTS):
+ message = (
+ f"Error at line {line_idx} in file {str(md_file)}, "
+ "python block was opened before the previous one was "
+ "closed (missing ``` ?)"
+ )
+ raise SyntaxError(message)
+ current_python_code.append(line)
+ python_block_continues = False
+ else:
+ match = re.match(DIRECTIVE_COMMENT_PATTERN, line)
+ if match is not None:
+ directive = match.group(1)
+ if directive == SKIP_DIRECTIVE:
+ skip_next_python_block = True
+ elif directive == CONT_DIRECTIVE:
+ python_block_continues = True
+
+ python_block_continues = python_block_continues and not skip_next_python_block
+
+ return python_code_blocks
+
+
+def main(args):
+ """The actual processing."""
+ md_dir_path = Path(args.md_dir)
+ md_files = sorted(md_dir_path.glob("**/*.md"))
+
+ code_blocks_per_file: Dict[str, Dict[int, List[str]]] = {}
+
+ err_msg = ""
+
+ for md_file in md_files:
+ md_file = md_file.resolve().absolute()
+ md_file_str = str(md_file)
+ # pylint: disable=broad-except
+ try:
+ code_blocks_per_file[md_file_str] = get_code_blocks_for_file(md_file)
+ except Exception:
+ err_msg += f"Error while converting {md_file_str}"
+ err_msg += traceback.format_exc() + "\n"
+ # pylint: enable=broad-except
+
+ for md_file_str, code_blocks in code_blocks_per_file.items():
+ for line_idx, python_code in code_blocks.items():
+ # pylint: disable=broad-except,exec-used
+ try:
+ print(f"Testing block starting line #{line_idx} from {md_file_str}")
+ python_code = "".join(python_code)
+ compiled_code = compile(python_code, filename=md_file_str, mode="exec")
+ exec(compiled_code, {"__MODULE__": "__main__"}) # noqa: S102
+ print("Success")
+ except Exception:
+ print("Failed")
+ err_msg += (
+ f"Error while testing block starting line #{line_idx} from {md_file_str}:\n"
+ )
+ err_msg += f"```\n{python_code}```\n"
+ err_msg += traceback.format_exc() + "\n"
+ # pylint: enable=broad-except,exec-used
+
+ if err_msg != "":
+ print(err_msg)
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(
        "Tests python code blocks found in md files", allow_abbrev=False
+ )
+ parser.add_argument(
        "--md_dir", type=str, help="The path to the dir containing the md files to test."
+ )
+
+ cli_args = parser.parse_args()
+
+ main(cli_args)
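The core of `get_code_blocks_for_file` above is a small state machine over fence lines. Here is an illustrative, dependency-free re-implementation (`extract_python_blocks` is not part of the script's API, and the fence string is built programmatically so this example contains no literal fence):

```python
# Sketch of the extraction loop: collect python-fenced blocks from markdown,
# keyed by the line number of the opening fence
FENCE = "`" * 3
PY_STARTS = tuple(FENCE + hint for hint in ("py", "python", "python3"))


def extract_python_blocks(markdown_text):
    blocks, current, start = {}, None, None
    for idx, line in enumerate(markdown_text.splitlines(), 1):
        if current is None and line.startswith(PY_STARTS):
            current, start = [], idx  # opening fence: remember its line number
        elif current is not None and line.strip() == FENCE:
            blocks[start] = current  # closing fence: store the finished block
            current = None
        elif current is not None:
            current.append(line)
    if current is not None:
        raise SyntaxError("Reached EOF before the python block was closed")
    return blocks


md = "\n".join(["# Title", "", FENCE + "python", "print(1 + 1)", FENCE])
print(extract_python_blocks(md))  # {3: ['print(1 + 1)']}
```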
diff --git a/frontends/concrete-python/script/make_utils/upgrade_deps.sh b/frontends/concrete-python/script/make_utils/upgrade_deps.sh
new file mode 100755
index 000000000..5e44325e5
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/upgrade_deps.sh
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+# verbose output please
+set -v
+
+no_dev_file=$(mktemp --suffix=.txt)
+all_file=$(mktemp --suffix=.txt)
+dev_file=$(mktemp --suffix=.txt)
+
+poetry show -o -t --only main | grep -v -e "--" | cut -d " " -f 1 | sed 's/$/\@latest/g' > "${no_dev_file}"
+poetry show -o -t | grep -v -e "--" | cut -d " " -f 1 | sed 's/$/\@latest/g' > "${all_file}"
+join -v1 -v2 "${all_file}" "${no_dev_file}" > "${dev_file}"
+# shellcheck disable=SC2002
+cat "${no_dev_file}" | xargs poetry add
+# shellcheck disable=SC2002
+cat "${dev_file}" | xargs poetry add --dev
+
+rm "${no_dev_file}"
+rm "${dev_file}"
+rm "${all_file}"
diff --git a/frontends/concrete-python/script/make_utils/version_utils.py b/frontends/concrete-python/script/make_utils/version_utils.py
new file mode 100644
index 000000000..b097cf77d
--- /dev/null
+++ b/frontends/concrete-python/script/make_utils/version_utils.py
@@ -0,0 +1,222 @@
+"""Tool to manage version in the project"""
+
+import argparse
+import json
+import os
+import re
+import sys
+from pathlib import Path
+from typing import List, Optional
+
+import tomlkit
+from semver import VersionInfo
+
+
+def strip_leading_v(version_str: str):
+ """Strip leading v of a version which is not SemVer compatible."""
+ return version_str[1:] if version_str.startswith("v") else version_str
+
+
+def islatest(args):
+ """islatest command entry point."""
+ print(args, file=sys.stderr)
+
+ # This is the safest default
+ result = {"is_latest": False, "is_prerelease": True}
+
+ new_version_str = strip_leading_v(args.new_version)
+ if VersionInfo.isvalid(new_version_str):
+ new_version_info = VersionInfo.parse(new_version_str)
+ if new_version_info.prerelease is None:
+ # If it's an actual release
+ all_versions_str = (
+ strip_leading_v(version_str) for version_str in args.existing_versions
+ )
+
+ # Keep versions that are not release candidate
+ version_infos = [
+ VersionInfo.parse(version_str)
+ for version_str in all_versions_str
+ if VersionInfo.isvalid(version_str)
+ ]
+ all_non_prerelease_version_infos = [
+ version_info for version_info in version_infos if version_info.prerelease is None
+ ]
+
+ all_non_prerelease_version_infos.append(new_version_info)
+
+ new_version_is_latest = max(all_non_prerelease_version_infos) == new_version_info
+ result["is_latest"] = new_version_is_latest
+ result["is_prerelease"] = False
+
+ print(json.dumps(result))
+
+
+def update_variable_in_py_file(file_path: Path, var_name: str, version_str: str):
+ """Update the version in a .py file."""
+
+ file_content = None
+ with open(file_path, encoding="utf-8") as f:
+ file_content = f.read()
+
+ updated_file_content = re.sub(
+ rf'{var_name} *[:=] *["\'](.+)["\']',
+ rf'{var_name} = "{version_str}"',
+ file_content,
+ )
+
+ with open(file_path, "w", encoding="utf-8", newline="\n") as f:
+ f.write(updated_file_content)
+
+
+def update_variable_in_toml_file(file_path: Path, var_name: str, version_str: str):
+ """Update the version in a .toml file."""
+ toml_content = None
+ with open(file_path, encoding="utf-8") as f:
+ toml_content = tomlkit.loads(f.read())
+
+ toml_keys = var_name.split(".")
+ current_content = toml_content
+ for toml_key in toml_keys[:-1]:
+ current_content = current_content[toml_key]
+ last_toml_key = toml_keys[-1]
+ current_content[last_toml_key] = version_str
+
+ with open(file_path, "w", encoding="utf-8", newline="\n") as f:
+ f.write(tomlkit.dumps(toml_content))
+
+
+def load_file_vars_set(pyproject_path: os.PathLike, cli_file_vars: Optional[List[str]]):
+ """Load files and their version variables set-up in pyproject.toml and passed as arguments."""
+
+ file_vars_set = set()
+ if cli_file_vars is not None:
+ file_vars_set.update(cli_file_vars)
+
+ pyproject_path = Path(pyproject_path).resolve()
+
+ # Check if there is a semantic release configuration
+ if pyproject_path.exists():
+ pyproject_content = None
+ with open(pyproject_path, encoding="utf-8") as f:
+ pyproject_content = tomlkit.loads(f.read())
+
+ try:
+ sr_conf = pyproject_content["tool"]["semantic_release"]
+ sr_version_toml: str = sr_conf.get("version_toml", "")
+ file_vars_set.update(sr_version_toml.split(","))
+ sr_version_variable: str = sr_conf.get("version_variable", "")
+ file_vars_set.update(sr_version_variable.split(","))
+ except KeyError:
+ print("No configuration for semantic release in pyproject.toml")
+
+ return file_vars_set
+
+
+def set_version(args):
+ """set-version command entry point."""
+
+ version_str = strip_leading_v(args.version)
+ if not VersionInfo.isvalid(version_str):
+ message = f"Unable to validate version: {args.version}"
+ raise RuntimeError(message)
+
+ file_vars_set = load_file_vars_set(args.pyproject_file, args.file_vars)
+
+ for file_var_str in sorted(file_vars_set):
+ print(f"Processing {file_var_str}")
+ file, var_name = file_var_str.split(":", 1)
+ file_path = Path(file).resolve()
+
+ if file_path.suffix == ".py":
+ update_variable_in_py_file(file_path, var_name, version_str)
+ elif file_path.suffix == ".toml":
+ update_variable_in_toml_file(file_path, var_name, version_str)
+ else:
+ message = f"Unsupported file extension: {file_path.suffix}"
+ raise RuntimeError(message)
+
+
+def get_variable_from_py_file(file_path: Path, var_name: str):
+ """Read variable value from a .py file."""
+ file_content = None
+ with open(file_path, encoding="utf-8") as f:
+ file_content = f.read()
+
+ variable_values_set = set()
+
+ start_pos = 0
+ while True:
+ file_content = file_content[start_pos:]
+ match = re.search(
+ rf'{var_name} *[:=] *["\'](.+)["\']',
+ file_content,
+ )
+ if match is None:
+ break
+
+ variable_values_set.add(match.group(1))
+ start_pos = match.end()
+
+ return variable_values_set
+
+
+def get_variable_from_toml_file(file_path: Path, var_name: str):
+ """Read variable value from a .toml file."""
+
+ toml_content = None
+ with open(file_path, encoding="utf-8") as f:
+ toml_content = tomlkit.loads(f.read())
+
+ toml_keys = var_name.split(".")
+ current_content = toml_content
+ for toml_key in toml_keys:
+ current_content = current_content[toml_key]
+
+ return current_content
+
+
+def main(args):
+ """Entry point"""
+ args.entry_point(args)
+
+
+if __name__ == "__main__":
+ main_parser = argparse.ArgumentParser("Version utils", allow_abbrev=False)
+
+ sub_parsers = main_parser.add_subparsers(dest="sub-command", required=True)
+
+ parser_islatest = sub_parsers.add_parser("islatest")
+ parser_islatest.add_argument(
+ "--new-version", type=str, required=True, help="The new version to compare"
+ )
+ parser_islatest.add_argument(
+ "--existing-versions",
+ type=str,
+ nargs="+",
+ required=True,
+ help="The list of existing versions",
+ )
+ parser_islatest.set_defaults(entry_point=islatest)
+
+ parser_set_version = sub_parsers.add_parser("set-version")
+ parser_set_version.add_argument("--version", type=str, required=True, help="The version to set")
+ parser_set_version.add_argument(
+ "--pyproject-file",
+ type=str,
+ default="pyproject.toml",
+ help="The path to a project's pyproject.toml file, defaults to $pwd/pyproject.toml",
+ )
+ parser_set_version.add_argument(
+ "--file-vars",
+ type=str,
+ nargs="+",
+ help=(
+ "A space separated list of file/path.{py, toml}:variable to update with the new version"
+ ),
+ )
+ parser_set_version.set_defaults(entry_point=set_version)
+
+ cli_args = main_parser.parse_args()
+
+ main(cli_args)
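The regex substitution used by `update_variable_in_py_file` above can be checked on an in-memory module source (the `__version__` line is example data):

```python
import re

# Example data; mirrors the substitution in update_variable_in_py_file
var_name, version_str = "__version__", "1.1.0"
source = '__version__ = "1.0.0"\n'
updated = re.sub(
    rf'{var_name} *[:=] *["\'](.+)["\']',
    rf'{var_name} = "{version_str}"',
    source,
)
print(updated, end="")  # __version__ = "1.1.0"
```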
diff --git a/frontends/concrete-python/script/nbmake_utils/notebook_finalize.py b/frontends/concrete-python/script/nbmake_utils/notebook_finalize.py
new file mode 100644
index 000000000..e8af4f8f7
--- /dev/null
+++ b/frontends/concrete-python/script/nbmake_utils/notebook_finalize.py
@@ -0,0 +1,55 @@
+"""Finalizer for Jupyter notebooks."""
+import argparse
+import json
+from pathlib import Path
+
+
+def main():
+ """Finalize"""
+
+ parser = argparse.ArgumentParser(description="Sanitizer for Jupyter Notebooks")
+
+ parser.add_argument("base", type=str, help="directory which contains the notebooks")
+ parser.add_argument("--check", action="store_true", help="flag to enable just checking mode")
+
+ args = parser.parse_args()
+
+ base = Path(args.base)
+ notebooks = base.glob("**/*.ipynb")
+
+ for notebook in notebooks:
+ path = str(notebook)
+ if "_build" in path or ".ipynb_checkpoints" in path:
+ continue
+
+ with open(notebook, "r", encoding="utf-8") as f:
+ content = json.load(f)
+
+ if args.check:
+ try:
+ metadata = content["metadata"]
+ assert len(metadata) == 1
+ assert "execution" in metadata
+
+ execution = metadata["execution"]
+ assert len(execution) == 1
+ assert "timeout" in execution
+
+ timeout = execution["timeout"]
+ assert timeout == 10800 # 3 hours
+ except Exception:
+ print("Notebooks are not sanitized. Please run `make conformance`.")
+ raise
+ else:
+ content["metadata"] = {
+ "execution": {
+ "timeout": 10800, # 3 hours
+ }
+ }
+ with open(notebook, "w", newline="\n", encoding="utf-8") as f:
+ json.dump(content, f, indent=1, ensure_ascii=False)
+ f.write("\n")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/frontends/concrete-python/script/progress_tracker_utils/benchmark_and_publish_findings_in_docker.sh b/frontends/concrete-python/script/progress_tracker_utils/benchmark_and_publish_findings_in_docker.sh
new file mode 100755
index 000000000..59d98dfb8
--- /dev/null
+++ b/frontends/concrete-python/script/progress_tracker_utils/benchmark_and_publish_findings_in_docker.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+# Run benchmarks while logging the intermediate results
+# Publish findings in the progress tracker
+
+set -e
+
+# shellcheck disable=SC1091
+if [ -f .env ]
+then
+ # shellcheck disable=SC1091
+ set -a; source .env; set +a
+fi
+
+DEV_VENV_PATH="/home/dev_user/dev_venv"
+
+# shellcheck disable=SC1090,SC1091
+if ! source "${DEV_VENV_PATH}/bin/activate"; then
+ python3 -m venv "${DEV_VENV_PATH}"
+ # shellcheck disable=SC1090,SC1091
+ source "${DEV_VENV_PATH}/bin/activate"
+fi
+
+cd /src/ && make setup_env
+
+mkdir -p /tmp/keycache
+mkdir -p logs
+
+initial_concrete_log=logs/$(date -u --iso-8601=seconds).concrete.log
+make -s benchmark 2>&1 | tee -a "$initial_concrete_log"
+
+final_concrete_log=logs/$(date -u --iso-8601=seconds).concrete.log
+cat -s "$initial_concrete_log" | sed '1d; $d' > "$final_concrete_log"
+
+# sed above removes the first and the last lines of the log
+# which are empty to provide a nice console output
+# but empty lines are useless for logs so we get rid of them
+
+rm "$initial_concrete_log"
+cp "$final_concrete_log" logs/latest.concrete.log
+
+curl \
+ -H "Authorization: Bearer $CONCRETE_PROGRESS_TRACKER_TOKEN" \
+ -H 'Content-Type: application/json' \
+ -d @progress.json \
+ -X POST "$CONCRETE_PROGRESS_TRACKER_URL"/measurement
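The `sed '1d; $d'` trimming used on the benchmark log above (drop the first and last lines, which are empty padding for console output) can be mimicked in Python, which makes the intent explicit:

```python
# Equivalent of `sed '1d; $d'`: drop the first and the last line of the log.
# A sketch only; the script itself does this with sed on the shell side.
def trim_log(text: str) -> str:
    lines = text.splitlines()
    return "\n".join(lines[1:-1])

log = "\nbenchmark: 1.2s\nbenchmark: 0.9s\n"
# splitlines() gives ["", "benchmark: 1.2s", "benchmark: 0.9s"]
assert trim_log(log) == "benchmark: 1.2s"
```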
diff --git a/frontends/concrete-python/script/source_format/format_python.sh b/frontends/concrete-python/script/source_format/format_python.sh
new file mode 100755
index 000000000..73be71883
--- /dev/null
+++ b/frontends/concrete-python/script/source_format/format_python.sh
@@ -0,0 +1,48 @@
+#!/bin/bash
+
+function usage() {
+ echo "$0: install system and data, to support compiler"
+ echo
+ echo "--help Print this message"
+ echo "--check Do not apply format"
+ echo "--dir Specify a source directory"
+ echo
+}
+
+CHECK=
+DIRS=()
+FAILURES=0
+
+while [ -n "$1" ]
+do
+ case $1 in
+ "--help" | "-h" )
+ usage
+ exit 0
+ ;;
+
+ "--check" )
+ CHECK="$1"
+ ;;
+
+ "--dir" )
+ shift
+ DIRS+=("$1")
+ ;;
+
+ *)
+ echo "Unknown param : $1"
+ exit 1
+ ;;
+ esac
+ shift
+done
+
+for SRC_DIR in "${DIRS[@]}"; do
+ isort -l 100 --profile black ${CHECK:+"$CHECK"} "${SRC_DIR}"
+ ((FAILURES+=$?))
+ black -l 100 ${CHECK:+"$CHECK"} "${SRC_DIR}"
+ ((FAILURES+=$?))
+done
+
+if [[ "$FAILURES" != "0" ]]; then
+ exit 1
+fi
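The `${CHECK:+"$CHECK"}` expansion above appends `--check` to the tool invocation only when the flag was passed, so the same command line serves both the formatting and the checking mode. The same conditional argv construction in Python (a hypothetical wrapper, for illustration only):

```python
# Build the formatter command line, adding --check only when requested,
# like the shell expansion ${CHECK:+"$CHECK"} does above.
def format_command(tool: str, src_dir: str, check: bool) -> list:
    args = [tool, "-l", "100"]
    if tool == "isort":
        args += ["--profile", "black"]  # keep isort compatible with black
    if check:
        args.append("--check")
    args.append(src_dir)
    return args

assert format_command("black", "src", check=False) == ["black", "-l", "100", "src"]
assert format_command("isort", "src", check=True) == [
    "isort", "-l", "100", "--profile", "black", "--check", "src"
]
```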
diff --git a/frontends/concrete-python/tests/__init__.py b/frontends/concrete-python/tests/__init__.py
new file mode 100644
index 000000000..03a1d67ad
--- /dev/null
+++ b/frontends/concrete-python/tests/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of `concrete.numpy` namespace.
+"""
diff --git a/frontends/concrete-python/tests/compilation/__init__.py b/frontends/concrete-python/tests/compilation/__init__.py
new file mode 100644
index 000000000..e5aca87e2
--- /dev/null
+++ b/frontends/concrete-python/tests/compilation/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of `concrete.numpy.compilation` namespace.
+"""
diff --git a/frontends/concrete-python/tests/compilation/test_artifacts.py b/frontends/concrete-python/tests/compilation/test_artifacts.py
new file mode 100644
index 000000000..380323140
--- /dev/null
+++ b/frontends/concrete-python/tests/compilation/test_artifacts.py
@@ -0,0 +1,63 @@
+"""
+Tests of `DebugArtifacts` class.
+"""
+
+import tempfile
+from pathlib import Path
+
+import numpy as np
+
+from concrete.numpy import DebugArtifacts, compiler
+
+
+def test_artifacts_export(helpers):
+ """
+ Test `export` method of `DebugArtifacts` class.
+ """
+
+ with tempfile.TemporaryDirectory() as path:
+ tmpdir = Path(path)
+
+ configuration = helpers.configuration()
+ artifacts = DebugArtifacts(tmpdir)
+
+ @compiler({"x": "encrypted"})
+ def f(x):
+ a = ((np.sin(x) ** 2) + (np.cos(x) ** 2)).astype(np.int64)
+ b = np.where(x < 5, x * 10, x + 10)
+ return a + b
+
+ inputset = range(10)
+ f.compile(inputset, configuration, artifacts)
+
+ artifacts.export()
+
+ assert (tmpdir / "environment.txt").exists()
+ assert (tmpdir / "requirements.txt").exists()
+
+ assert (tmpdir / "function.txt").exists()
+ assert (tmpdir / "parameters.txt").exists()
+
+ assert (tmpdir / "1.initial.graph.txt").exists()
+ assert (tmpdir / "2.after-fusing.graph.txt").exists()
+ assert (tmpdir / "3.after-fusing.graph.txt").exists()
+ assert (tmpdir / "4.final.graph.txt").exists()
+
+ assert (tmpdir / "mlir.txt").exists()
+ assert (tmpdir / "client_parameters.json").exists()
+
+ artifacts.export()
+
+ assert (tmpdir / "environment.txt").exists()
+ assert (tmpdir / "requirements.txt").exists()
+
+ assert (tmpdir / "function.txt").exists()
+ assert (tmpdir / "parameters.txt").exists()
+
+ assert (tmpdir / "1.initial.graph.txt").exists()
+ assert (tmpdir / "2.after-fusing.graph.txt").exists()
+ assert (tmpdir / "3.after-fusing.graph.txt").exists()
+ assert (tmpdir / "4.final.graph.txt").exists()
+
+ assert (tmpdir / "mlir.txt").exists()
+ assert (tmpdir / "client_parameters.json").exists()
diff --git a/frontends/concrete-python/tests/compilation/test_circuit.py b/frontends/concrete-python/tests/compilation/test_circuit.py
new file mode 100644
index 000000000..fefdd9c07
--- /dev/null
+++ b/frontends/concrete-python/tests/compilation/test_circuit.py
@@ -0,0 +1,345 @@
+"""
+Tests of `Circuit` class.
+"""
+
+import tempfile
+from pathlib import Path
+
+import numpy as np
+import pytest
+
+from concrete.numpy import Client, ClientSpecs, EvaluationKeys, LookupTable, Server, compiler
+
+
+def test_circuit_str(helpers):
+ """
+ Test `__str__` method of `Circuit` class.
+ """
+
+ configuration = helpers.configuration()
+
+ @compiler({"x": "encrypted", "y": "encrypted"})
+ def f(x, y):
+ return x + y
+
+ inputset = [(np.random.randint(0, 2**4), np.random.randint(0, 2**5)) for _ in range(100)]
+ circuit = f.compile(inputset, configuration.fork(p_error=6e-5))
+
+ assert str(circuit) == circuit.graph.format()
+
+
+def test_circuit_feedback(helpers):
+ """
+ Test feedback properties of `Circuit` class.
+ """
+
+ configuration = helpers.configuration()
+
+ p_error = 0.1
+ global_p_error = 0.05
+
+ @compiler({"x": "encrypted", "y": "encrypted"})
+ def f(x, y):
+ return np.sqrt(((x + y) ** 2) + 10).astype(np.int64)
+
+ inputset = [(np.random.randint(0, 2**2), np.random.randint(0, 2**2)) for _ in range(100)]
+ circuit = f.compile(inputset, configuration, p_error=p_error, global_p_error=global_p_error)
+
+ assert isinstance(circuit.complexity, float)
+ assert isinstance(circuit.size_of_secret_keys, int)
+ assert isinstance(circuit.size_of_bootstrap_keys, int)
+ assert isinstance(circuit.size_of_keyswitch_keys, int)
+ assert isinstance(circuit.size_of_inputs, int)
+ assert isinstance(circuit.size_of_outputs, int)
+ assert isinstance(circuit.p_error, float)
+ assert isinstance(circuit.global_p_error, float)
+
+ assert circuit.p_error <= p_error
+ assert circuit.global_p_error <= global_p_error
+
+
+def test_circuit_bad_run(helpers):
+ """
+ Test `run` method of `Circuit` class with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ @compiler({"x": "encrypted", "y": "encrypted"})
+ def f(x, y):
+ return x + y
+
+ inputset = [(np.random.randint(0, 2**4), np.random.randint(0, 2**5)) for _ in range(100)]
+ circuit = f.compile(inputset, configuration)
+
+ # with 1 argument
+ # ---------------
+
+ with pytest.raises(ValueError) as excinfo:
+ circuit.encrypt_run_decrypt(1)
+
+ assert str(excinfo.value) == "Expected 2 inputs but got 1"
+
+ # with 3 arguments
+ # ----------------
+
+ with pytest.raises(ValueError) as excinfo:
+ circuit.encrypt_run_decrypt(1, 2, 3)
+
+ assert str(excinfo.value) == "Expected 2 inputs but got 3"
+
+ # with negative argument 0
+ # ------------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ circuit.encrypt_run_decrypt(-1, 11)
+
+ assert str(excinfo.value) == (
+ "Expected argument 0 to be EncryptedScalar but it's EncryptedScalar"
+ )
+
+ # with negative argument 1
+ # ------------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ circuit.encrypt_run_decrypt(1, -11)
+
+ assert str(excinfo.value) == (
+ "Expected argument 1 to be EncryptedScalar but it's EncryptedScalar"
+ )
+
+ # with large argument 0
+ # ---------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ circuit.encrypt_run_decrypt(100, 10)
+
+ assert str(excinfo.value) == (
+ "Expected argument 0 to be EncryptedScalar but it's EncryptedScalar"
+ )
+
+ # with large argument 1
+ # ---------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ circuit.encrypt_run_decrypt(1, 100)
+
+ assert str(excinfo.value) == (
+ "Expected argument 1 to be EncryptedScalar but it's EncryptedScalar"
+ )
+
+
+def test_client_server_api(helpers):
+ """
+ Test client/server API.
+ """
+
+ configuration = helpers.configuration()
+
+ @compiler({"x": "encrypted"})
+ def function(x):
+ return x + 42
+
+ inputset = [np.random.randint(0, 10, size=(3,)) for _ in range(10)]
+ circuit = function.compile(inputset, configuration.fork(jit=False))
+
+ # for coverage
+ circuit.keygen()
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_dir_path = Path(tmp_dir)
+
+ server_path = tmp_dir_path / "server.zip"
+ circuit.server.save(server_path)
+
+ client_path = tmp_dir_path / "client.zip"
+ circuit.client.save(client_path)
+
+ circuit.cleanup()
+
+ server = Server.load(server_path)
+
+ serialized_client_specs = server.client_specs.serialize()
+ client_specs = ClientSpecs.unserialize(serialized_client_specs)
+
+ clients = [
+ Client(client_specs, configuration.insecure_key_cache_location),
+ Client.load(client_path, configuration.insecure_key_cache_location),
+ ]
+
+ for client in clients:
+ args = client.encrypt([3, 8, 1])
+
+ serialized_args = client.specs.serialize_public_args(args)
+ serialized_evaluation_keys = client.evaluation_keys.serialize()
+
+ unserialized_args = server.client_specs.unserialize_public_args(serialized_args)
+ unserialized_evaluation_keys = EvaluationKeys.unserialize(serialized_evaluation_keys)
+
+ result = server.run(unserialized_args, unserialized_evaluation_keys)
+ serialized_result = server.client_specs.serialize_public_result(result)
+
+ unserialized_result = client.specs.unserialize_public_result(serialized_result)
+ output = client.decrypt(unserialized_result)
+
+ assert np.array_equal(output, [45, 50, 43])
+
+ with pytest.raises(RuntimeError) as excinfo:
+ server.save("UNUSED", via_mlir=True)
+
+ assert str(excinfo.value) == "Loaded server objects cannot be saved again via MLIR"
+
+ server.cleanup()
+
+
+def test_client_server_api_via_mlir(helpers):
+ """
+ Test client/server API.
+ """
+
+ configuration = helpers.configuration()
+
+ @compiler({"x": "encrypted"})
+ def function(x):
+ return x + 42
+
+ inputset = [np.random.randint(0, 10, size=(3,)) for _ in range(10)]
+ circuit = function.compile(inputset, configuration.fork(jit=False))
+
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_dir_path = Path(tmp_dir)
+
+ server_path = tmp_dir_path / "server.zip"
+ circuit.server.save(server_path, via_mlir=True)
+
+ client_path = tmp_dir_path / "client.zip"
+ circuit.client.save(client_path)
+
+ circuit.cleanup()
+
+ server = Server.load(server_path)
+
+ serialized_client_specs = server.client_specs.serialize()
+ client_specs = ClientSpecs.unserialize(serialized_client_specs)
+
+ clients = [
+ Client(client_specs, configuration.insecure_key_cache_location),
+ Client.load(client_path, configuration.insecure_key_cache_location),
+ ]
+
+ for client in clients:
+ args = client.encrypt([3, 8, 1])
+
+ serialized_args = client.specs.serialize_public_args(args)
+ serialized_evaluation_keys = client.evaluation_keys.serialize()
+
+ unserialized_args = server.client_specs.unserialize_public_args(serialized_args)
+ unserialized_evaluation_keys = EvaluationKeys.unserialize(serialized_evaluation_keys)
+
+ result = server.run(unserialized_args, unserialized_evaluation_keys)
+ serialized_result = server.client_specs.serialize_public_result(result)
+
+ unserialized_result = client.specs.unserialize_public_result(serialized_result)
+ output = client.decrypt(unserialized_result)
+
+ assert np.array_equal(output, [45, 50, 43])
+
+ server.cleanup()
+
+
+def test_bad_server_save(helpers):
+ """
+ Test `save` method of `Server` class with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ @compiler({"x": "encrypted"})
+ def function(x):
+ return x + 42
+
+ inputset = range(10)
+ circuit = function.compile(inputset, configuration)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ circuit.server.save("test.zip")
+
+ assert str(excinfo.value) == "Just-in-Time compilation cannot be saved"
+
+
+@pytest.mark.parametrize("p_error", [0.75, 0.5, 0.4, 0.25, 0.2, 0.1, 0.01, 0.001])
+@pytest.mark.parametrize("bit_width", [5])
+@pytest.mark.parametrize("sample_size", [1_000_000])
+@pytest.mark.parametrize("tolerance", [0.075])
+def test_p_error_simulation(p_error, bit_width, sample_size, tolerance, helpers):
+ """
+ Test p_error simulation.
+ """
+
+ configuration = helpers.configuration().fork(global_p_error=None)
+
+ table = LookupTable([0] + [x - 1 for x in range(1, 2**bit_width)])
+
+ @compiler({"x": "encrypted"})
+ def function(x):
+ return table[x + 1]
+
+ inputset = [np.random.randint(0, (2**bit_width) - 1, size=(sample_size,)) for _ in range(100)]
+ circuit = function.compile(inputset, configuration=configuration, p_error=p_error)
+
+ assert circuit.p_error < p_error
+
+ sample = np.random.randint(0, (2**bit_width) - 1, size=(sample_size,))
+ output = circuit.simulate(sample)
+
+ errors = np.sum(output != sample)
+
+ expected_number_of_errors_on_average = sample_size * circuit.p_error
+ tolerated_difference = expected_number_of_errors_on_average * tolerance
+
+ acceptable_number_of_errors = [
+ round(expected_number_of_errors_on_average - tolerated_difference),
+ round(expected_number_of_errors_on_average + tolerated_difference),
+ ]
+ assert acceptable_number_of_errors[0] <= errors <= acceptable_number_of_errors[1]
+
+
+def test_circuit_run_with_unused_arg(helpers):
+ """
+ Test `encrypt_run_decrypt` method of `Circuit` class with unused arguments.
+ """
+
+ configuration = helpers.configuration()
+
+ @compiler({"x": "encrypted", "y": "encrypted"})
+ def f(x, y): # pylint: disable=unused-argument
+ return x + 10
+
+ inputset = [
+ (np.random.randint(2**3, 2**4), np.random.randint(2**4, 2**5)) for _ in range(100)
+ ]
+ circuit = f.compile(inputset, configuration)
+
+ with pytest.raises(ValueError, match="Expected 2 inputs but got 1"):
+ circuit.encrypt_run_decrypt(10)
+
+ assert circuit.encrypt_run_decrypt(10, 0) == 20
+ assert circuit.encrypt_run_decrypt(10, 10) == 20
+ assert circuit.encrypt_run_decrypt(10, 20) == 20
+
+
+def test_dataflow_circuit(helpers):
+ """
+ Test execution with dataflow_parallelize=True.
+ """
+
+ configuration = helpers.configuration().fork(dataflow_parallelize=True)
+
+ @compiler({"x": "encrypted", "y": "encrypted"})
+ def f(x, y):
+ return (x**2) + (y // 2)
+
+ inputset = [(np.random.randint(0, 2**3), np.random.randint(0, 2**3)) for _ in range(100)]
+ circuit = f.compile(inputset, configuration)
+
+ assert circuit.encrypt_run_decrypt(5, 6) == 28
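The acceptance interval in `test_p_error_simulation` above is plain arithmetic around the expected error count: the circuit's error probability scaled by the sample size, with a ±7.5% slack. For the tightest parametrization (using the requested `p_error` in place of the circuit's actual, slightly smaller one) the numbers work out as follows:

```python
# Derivation of the acceptance interval used by test_p_error_simulation.
sample_size = 1_000_000
p_error = 0.01
tolerance = 0.075

expected = sample_size * p_error           # average number of errors
slack = expected * tolerance               # tolerated deviation, 7.5%
bounds = [round(expected - slack), round(expected + slack)]

assert expected == 10000.0
assert bounds == [9250, 10750]
```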
diff --git a/frontends/concrete-python/tests/compilation/test_compiler.py b/frontends/concrete-python/tests/compilation/test_compiler.py
new file mode 100644
index 000000000..2a790312a
--- /dev/null
+++ b/frontends/concrete-python/tests/compilation/test_compiler.py
@@ -0,0 +1,332 @@
+"""
+Tests of `Compiler` class.
+"""
+
+import numpy as np
+import pytest
+
+from concrete.numpy.compilation import Compiler
+
+
+def test_compiler_bad_init():
+ """
+ Test `__init__` method of `Compiler` class with bad parameters.
+ """
+
+ def f(x, y, z):
+ return x + y + z
+
+ # missing all
+ # -----------
+
+ with pytest.raises(ValueError) as excinfo:
+ Compiler(f, {})
+
+ assert str(excinfo.value) == (
+ "Encryption statuses of parameters 'x', 'y' and 'z' of function 'f' are not provided"
+ )
+
+ # missing x and y
+ # ---------------
+
+ with pytest.raises(ValueError) as excinfo:
+ Compiler(f, {"z": "clear"})
+
+ assert str(excinfo.value) == (
+ "Encryption statuses of parameters 'x' and 'y' of function 'f' are not provided"
+ )
+
+ # missing x
+ # ---------
+
+ with pytest.raises(ValueError) as excinfo:
+ Compiler(f, {"y": "encrypted", "z": "clear"})
+
+ assert str(excinfo.value) == (
+ "Encryption status of parameter 'x' of function 'f' is not provided"
+ )
+
+ # additional a, b, c
+ # ------------------
+ with pytest.raises(ValueError) as excinfo:
+ Compiler(
+ f,
+ {
+ "x": "encrypted",
+ "y": "encrypted",
+ "z": "encrypted",
+ "a": "encrypted",
+ "b": "encrypted",
+ "c": "encrypted",
+ },
+ )
+
+ assert str(excinfo.value) == (
+ "Encryption statuses of 'a', 'b' and 'c' are provided "
+ "but they are not a parameter of function 'f'"
+ )
+
+ # additional a and b
+ # ------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ Compiler(
+ f,
+ {
+ "x": "encrypted",
+ "y": "encrypted",
+ "z": "encrypted",
+ "a": "encrypted",
+ "b": "encrypted",
+ },
+ )
+
+ assert str(excinfo.value) == (
+ "Encryption statuses of 'a' and 'b' are provided "
+ "but they are not a parameter of function 'f'"
+ )
+
+ # additional a
+ # ------------
+
+ with pytest.raises(ValueError) as excinfo:
+ Compiler(
+ f,
+ {
+ "x": "encrypted",
+ "y": "encrypted",
+ "z": "encrypted",
+ "a": "encrypted",
+ },
+ )
+
+ assert str(excinfo.value) == (
+ "Encryption status of 'a' is provided but it is not a parameter of function 'f'"
+ )
+
+
+def test_compiler_bad_call():
+ """
+ Test `__call__` method of `Compiler` class with bad parameters.
+ """
+
+ def f(x, y, z):
+ return x + y + z
+
+ with pytest.raises(RuntimeError) as excinfo:
+ compiler = Compiler(f, {"x": "encrypted", "y": "encrypted", "z": "clear"})
+ compiler(1, 2, 3, invalid=4)
+
+ assert str(excinfo.value) == "Calling function 'f' with kwargs is not supported"
+
+
+def test_compiler_bad_trace(helpers):
+ """
+ Test `trace` method of `Compiler` class with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ # without inputset
+ # ----------------
+
+ def f(x, y, z):
+ return x + y + z
+
+ with pytest.raises(RuntimeError) as excinfo:
+ compiler = Compiler(
+ f,
+ {"x": "encrypted", "y": "encrypted", "z": "clear"},
+ )
+ compiler.trace(configuration=configuration)
+
+ assert str(excinfo.value) == "Tracing function 'f' without an inputset is not supported"
+
+ # bad return
+ # ----------
+
+ def g():
+ return np.array([{}, ()], dtype=object)
+
+ with pytest.raises(ValueError) as excinfo:
+ compiler = Compiler(g, {})
+ compiler.trace(inputset=[()], configuration=configuration)
+
+ assert str(excinfo.value) == "Function 'g' returned '[{} ()]', which is not supported"
+
+
+def test_compiler_bad_compile(helpers):
+ """
+ Test `compile` method of `Compiler` class with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ def f(x, y, z):
+ return x + y + z
+
+ # without inputset
+ # ----------------
+
+ with pytest.raises(RuntimeError) as excinfo:
+ compiler = Compiler(
+ f,
+ {"x": "encrypted", "y": "encrypted", "z": "clear"},
+ )
+ compiler.compile(configuration=configuration)
+
+ assert str(excinfo.value) == "Compiling function 'f' without an inputset is not supported"
+
+ # with bad inputset at the first input
+ # ------------------------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ compiler = Compiler(
+ f,
+ {"x": "encrypted", "y": "encrypted", "z": "clear"},
+ )
+ inputset = [1]
+ compiler.compile(inputset, configuration=configuration)
+
+ assert str(excinfo.value) == (
+ "Input #0 of your inputset is not well formed "
+ "(expected a tuple of 3 values got a single value)"
+ )
+
+ # with bad inputset at the second input
+ # -------------------------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ compiler = Compiler(
+ f,
+ {"x": "encrypted", "y": "encrypted", "z": "clear"},
+ )
+ inputset = [(1, 2, 3), (1, 2)]
+ compiler.compile(inputset, configuration=configuration)
+
+ assert str(excinfo.value) == (
+ "Input #1 of your inputset is not well formed "
+ "(expected a tuple of 3 values got a tuple of 2 values)"
+ )
+
+
+def test_compiler_compile_bad_inputset(helpers):
+ """
+ Test `compile` method of `Compiler` class with bad inputset.
+ """
+
+ configuration = helpers.configuration()
+
+ # with inf
+ # --------
+
+ def f(x):
+ return (x + np.inf).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ compiler = Compiler(f, {"x": "encrypted"})
+ compiler.compile(range(10), configuration=configuration)
+
+ assert str(excinfo.value) == "Bound measurement using inputset[0] failed"
+
+ helpers.check_str(
+ """
+
+Evaluation of the graph failed
+
+%0 = x # EncryptedScalar
+%1 = subgraph(%0) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ evaluation of this node failed
+return %1
+
+Subgraphs:
+
+ %1 = subgraph(%0):
+
+ %0 = input # EncryptedScalar
+ %1 = inf # ClearScalar
+ %2 = add(%0, %1) # EncryptedScalar
+ %3 = astype(%2, dtype=int_) # EncryptedScalar
+ return %3
+
+ """.strip(),
+ str(excinfo.value.__cause__).strip(),
+ )
+
+ helpers.check_str(
+ """
+
+Evaluation of the graph failed
+
+%0 = input # EncryptedScalar
+%1 = inf # ClearScalar
+%2 = add(%0, %1) # EncryptedScalar
+%3 = astype(%2, dtype=int_) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ evaluation of this node failed
+return %3
+
+ """.strip(),
+ str(excinfo.value.__cause__.__cause__).strip(),
+ )
+
+ assert (
+ str(excinfo.value.__cause__.__cause__.__cause__)
+ == "An `Inf` value is tried to be converted to integer"
+ )
+
+ # with nan
+ # --------
+
+ def g(x):
+ return (x + np.nan).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ compiler = Compiler(g, {"x": "encrypted"})
+ compiler.compile(range(10), configuration=configuration)
+
+ assert str(excinfo.value) == "Bound measurement using inputset[0] failed"
+
+ helpers.check_str(
+ """
+
+Evaluation of the graph failed
+
+%0 = x # EncryptedScalar
+%1 = subgraph(%0) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ evaluation of this node failed
+return %1
+
+Subgraphs:
+
+ %1 = subgraph(%0):
+
+ %0 = input # EncryptedScalar
+ %1 = nan # ClearScalar
+ %2 = add(%0, %1) # EncryptedScalar
+ %3 = astype(%2, dtype=int_) # EncryptedScalar
+ return %3
+
+ """.strip(),
+ str(excinfo.value.__cause__).strip(),
+ )
+
+ helpers.check_str(
+ """
+
+Evaluation of the graph failed
+
+%0 = input # EncryptedScalar
+%1 = nan # ClearScalar
+%2 = add(%0, %1) # EncryptedScalar
+%3 = astype(%2, dtype=int_) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ evaluation of this node failed
+return %3
+
+ """.strip(),
+ str(excinfo.value.__cause__.__cause__).strip(),
+ )
+
+ assert (
+ str(excinfo.value.__cause__.__cause__.__cause__)
+ == "A `NaN` value is tried to be converted to integer"
+ )
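`test_compiler_compile_bad_inputset` above walks the `__cause__` chain to reach the root error: the bound-measurement failure wraps the graph-evaluation failure, which wraps the integer-conversion failure. Python links exceptions this way automatically with `raise ... from ...` (a generic sketch of the mechanism, not the library's internals; the messages are reused from the test for recognizability):

```python
def measure_bounds():
    try:
        # innermost failure, as raised during graph evaluation
        raise ValueError("An `Inf` value is tried to be converted to integer")
    except ValueError as error:
        # wrapping preserves the original error in __cause__
        raise RuntimeError("Bound measurement using inputset[0] failed") from error

try:
    measure_bounds()
except RuntimeError as error:
    assert str(error) == "Bound measurement using inputset[0] failed"
    assert str(error.__cause__) == "An `Inf` value is tried to be converted to integer"
```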
diff --git a/frontends/concrete-python/tests/compilation/test_configuration.py b/frontends/concrete-python/tests/compilation/test_configuration.py
new file mode 100644
index 000000000..085c70773
--- /dev/null
+++ b/frontends/concrete-python/tests/compilation/test_configuration.py
@@ -0,0 +1,105 @@
+"""
+Tests of `Configuration` class.
+"""
+
+import pytest
+
+from concrete.numpy.compilation import Configuration
+
+
+@pytest.mark.parametrize(
+ "kwargs,expected_error,expected_message",
+ [
+ pytest.param(
+ {"enable_unsafe_features": False, "use_insecure_key_cache": True},
+ RuntimeError,
+ "Insecure key cache cannot be used without enabling unsafe features",
+ ),
+ pytest.param(
+ {
+ "enable_unsafe_features": True,
+ "use_insecure_key_cache": True,
+ "insecure_key_cache_location": None,
+ },
+ RuntimeError,
+ "Insecure key cache cannot be enabled without specifying its location",
+ ),
+ ],
+)
+def test_configuration_bad_init(kwargs, expected_error, expected_message):
+ """
+ Test `__init__` method of `Configuration` class with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ Configuration(**kwargs)
+
+ assert str(excinfo.value) == expected_message
+
+
+def test_configuration_fork():
+ """
+ Test `fork` method of `Configuration` class.
+ """
+
+ config1 = Configuration(enable_unsafe_features=True, loop_parallelize=False)
+ config2 = config1.fork(enable_unsafe_features=False, loop_parallelize=True)
+
+ assert config1 is not config2
+
+ assert config1.enable_unsafe_features is True
+ assert config1.loop_parallelize is False
+
+ assert config2.enable_unsafe_features is False
+ assert config2.loop_parallelize is True
+
+
+@pytest.mark.parametrize(
+ "kwargs,expected_error,expected_message",
+ [
+ pytest.param(
+ {"foo": False},
+ TypeError,
+ "Unexpected keyword argument 'foo'",
+ ),
+ pytest.param(
+ {"dump_artifacts_on_unexpected_failures": "yes"},
+ TypeError,
+ "Unexpected type for keyword argument 'dump_artifacts_on_unexpected_failures' "
+ "(expected 'bool', got 'str')",
+ ),
+ pytest.param(
+ {"insecure_key_cache_location": 3},
+ TypeError,
+ "Unexpected type for keyword argument 'insecure_key_cache_location' "
+ "(expected 'Optional[str]', got 'int')",
+ ),
+ pytest.param(
+ {"p_error": "yes"},
+ TypeError,
+ "Unexpected type for keyword argument 'p_error' "
+ "(expected 'Optional[float]', got 'str')",
+ ),
+ pytest.param(
+ {"global_p_error": "mamma mia"},
+ TypeError,
+ "Unexpected type for keyword argument 'global_p_error' "
+ "(expected 'Optional[float]', got 'str')",
+ ),
+ pytest.param(
+ {"show_optimizer": "please"},
+ TypeError,
+ "Unexpected type for keyword argument 'show_optimizer' "
+ "(expected 'Optional[bool]', got 'str')",
+ ),
+ ],
+)
+def test_configuration_bad_fork(kwargs, expected_error, expected_message):
+ """
+ Test `fork` method of `Configuration` class with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ Configuration().fork(**kwargs)
+
+ assert str(excinfo.value) == expected_message
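`test_configuration_fork` above relies on `fork` returning a fresh object with only the given fields overridden, leaving the original untouched. `dataclasses.replace` gives the same copy-on-fork semantics (a sketch of the pattern, not the actual `Configuration` implementation):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Config:
    enable_unsafe_features: bool = False
    loop_parallelize: bool = True

    def fork(self, **kwargs) -> "Config":
        # replace() copies the instance, overriding only the given fields;
        # an unknown field name raises TypeError, like the bad-fork tests expect
        return replace(self, **kwargs)

config1 = Config(enable_unsafe_features=True, loop_parallelize=False)
config2 = config1.fork(enable_unsafe_features=False, loop_parallelize=True)

assert config1 is not config2
assert config1.enable_unsafe_features and not config2.enable_unsafe_features
assert not config1.loop_parallelize and config2.loop_parallelize
```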
diff --git a/frontends/concrete-python/tests/compilation/test_decorators.py b/frontends/concrete-python/tests/compilation/test_decorators.py
new file mode 100644
index 000000000..a5407f5dc
--- /dev/null
+++ b/frontends/concrete-python/tests/compilation/test_decorators.py
@@ -0,0 +1,260 @@
+"""
+Tests of `compiler` and `circuit` decorators.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+def test_compiler_call_and_compile(helpers):
+ """
+ Test `__call__` and `compile` methods of `compiler` decorator back to back.
+ """
+
+ configuration = helpers.configuration()
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return x + 42
+
+ for i in range(10):
+ function(i)
+
+ circuit = function.compile(configuration=configuration)
+
+ sample = 5
+ helpers.check_execution(circuit, function, sample)
+
+
+def test_compiler_verbose_trace(helpers, capsys):
+ """
+ Test `trace` method of `compiler` decorator with verbose flag.
+ """
+
+ configuration = helpers.configuration()
+ artifacts = cnp.DebugArtifacts()
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return x + 42
+
+ inputset = range(10)
+ function.trace(inputset, configuration, artifacts, show_graph=True)
+
+ captured = capsys.readouterr()
+ assert captured.out.strip() == (
+ f"""
+
+Computation Graph
+------------------------------------------------------------------
+{str(list(artifacts.textual_representations_of_graphs.values())[-1][-1])}
+------------------------------------------------------------------
+
+ """.strip()
+ )
+
+
+def test_compiler_verbose_compile(helpers, capsys):
+ """
+ Test `compile` method of `compiler` decorator with verbose flag.
+ """
+
+ configuration = helpers.configuration()
+ artifacts = cnp.DebugArtifacts()
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return x + 42
+
+ inputset = range(10)
+ function.compile(inputset, configuration, artifacts, verbose=True)
+
+ captured = capsys.readouterr()
+ assert captured.out.strip().startswith(
+ f"""
+
+Computation Graph
+--------------------------------------------------------------------------------
+{list(artifacts.textual_representations_of_graphs.values())[-1][-1]}
+--------------------------------------------------------------------------------
+
+MLIR
+--------------------------------------------------------------------------------
+{artifacts.mlir_to_compile}
+--------------------------------------------------------------------------------
+
+Optimizer
+--------------------------------------------------------------------------------
+
+ """.strip()
+ )
+
+
+def test_circuit(helpers):
+ """
+ Test circuit decorator.
+ """
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit1(x: cnp.uint2):
+ return x + 42
+
+ helpers.check_str(
+ """
+
+%0 = x # EncryptedScalar
+%1 = 42 # ClearScalar
+%2 = add(%0, %1) # EncryptedScalar
+return %2
+
+ """.strip(),
+ str(circuit1),
+ )
+
+ # ======================================================================
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit2(x: cnp.tensor[cnp.uint2, 3, 2]):
+ return x + 42
+
+ helpers.check_str(
+ """
+
+%0 = x # EncryptedTensor
+%1 = 42 # ClearScalar
+%2 = add(%0, %1) # EncryptedTensor
+return %2
+
+ """.strip(),
+ str(circuit2),
+ )
+
+ # ======================================================================
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit3(x: cnp.uint3):
+ def square(x):
+ return x**2
+
+ return cnp.univariate(square, outputs=cnp.uint7)(x)
+
+ helpers.check_str(
+ """
+
+%0 = x # EncryptedScalar
+%1 = square(%0) # EncryptedScalar
+return %1
+
+ """.strip(),
+ str(circuit3),
+ )
+
+ # ======================================================================
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit4(x: cnp.uint3):
+ return ((np.sin(x) ** 2) + (np.cos(x) ** 2)).astype(cnp.uint3)
+
+ helpers.check_str(
+ """
+
+%0 = x # EncryptedScalar
+%1 = subgraph(%0) # EncryptedScalar
+return %1
+
+Subgraphs:
+
+ %1 = subgraph(%0):
+
+ %0 = input # EncryptedScalar
+ %1 = sin(%0) # EncryptedScalar
+ %2 = 2 # ClearScalar
+ %3 = power(%1, %2) # EncryptedScalar
+ %4 = cos(%0) # EncryptedScalar
+ %5 = 2 # ClearScalar
+ %6 = power(%4, %5) # EncryptedScalar
+ %7 = add(%3, %6) # EncryptedScalar
+ %8 = astype(%7) # EncryptedScalar
+ return %8
+
+ """.strip(),
+ str(circuit4),
+ )
+
+ # ======================================================================
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit5(x: cnp.int2):
+ return x + 42
+
+ helpers.check_str(
+ """
+
+%0 = x # EncryptedScalar
+%1 = 42 # ClearScalar
+%2 = add(%0, %1) # EncryptedScalar
+return %2
+
+ """.strip(),
+ str(circuit5),
+ )
+
+
+def test_bad_circuit(helpers):
+ """
+ Test circuit decorator with bad parameters.
+ """
+
+ # bad annotation
+ # --------------
+
+ with pytest.raises(ValueError) as excinfo:
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit1(x: int):
+ return x + 42
+
+ assert str(excinfo.value) == (
+ f"Annotation {str(int)} for argument 'x' is not valid "
+ f"(please use a cnp type such as `cnp.uint4` or 'cnp.tensor[cnp.uint4, 3, 2]')"
+ )
+
+ # missing encryption status
+ # -------------------------
+
+ with pytest.raises(ValueError) as excinfo:
+
+ @cnp.circuit({}, helpers.configuration())
+ def circuit2(x: cnp.uint3):
+ return x + 42
+
+ assert str(excinfo.value) == (
+ "Encryption status of parameter 'x' of function 'circuit2' is not provided"
+ )
+
+ # bad astype
+ # ----------
+ with pytest.raises(ValueError) as excinfo:
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit3(x: cnp.uint3):
+ return x.astype(np.int64)
+
+ assert str(excinfo.value) == (
+ "`astype` method must be called with a concrete.numpy type "
+ "for direct circuit definition (e.g., value.astype(cnp.uint4))"
+ )
+
+ # round
+ # -----
+ with pytest.raises(RuntimeError) as excinfo:
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def circuit4(x: cnp.uint3):
+ return round(x)
+
+ assert str(excinfo.value) == (
+ "'round(x)' cannot be used in direct definition (you may use np.around instead)"
+ )
diff --git a/frontends/concrete-python/tests/conftest.py b/frontends/concrete-python/tests/conftest.py
new file mode 100644
index 000000000..c839a3116
--- /dev/null
+++ b/frontends/concrete-python/tests/conftest.py
@@ -0,0 +1,331 @@
+"""
+Configuration of `pytest`.
+"""
+
+import json
+import os
+import random
+from pathlib import Path
+from typing import Any, Callable, Dict, List, Tuple, Union
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+import tests
+
+tests_directory = os.path.dirname(tests.__file__)
+
+
+INSECURE_KEY_CACHE_LOCATION = None
+
+
+def pytest_addoption(parser):
+ """
+ Add CLI options.
+ """
+
+ parser.addoption(
+ "--global-coverage",
+ type=str,
+ default=None,
+ action="store",
+ help="JSON file to dump pytest-cov terminal report.",
+ )
+ parser.addoption(
+ "--key-cache",
+ type=str,
+ default=None,
+ action="store",
+ help="Specify the location of the key cache",
+ )
+
+
+def pytest_sessionstart(session):
+ """
+ Initialize insecure key cache.
+ """
+ # pylint: disable=global-statement
+ global INSECURE_KEY_CACHE_LOCATION
+ # pylint: enable=global-statement
+
+ key_cache_location = session.config.getoption("--key-cache", default=None)
+ if key_cache_location is not None:
+ if key_cache_location.lower() == "disable":
+ key_cache_location = None
+ else:
+ key_cache_location = Path(key_cache_location).expanduser().resolve()
+ else:
+ key_cache_location = Path.home().resolve() / ".cache" / "concrete-numpy" / "pytest"
+
+ if key_cache_location:
+ key_cache_location.mkdir(parents=True, exist_ok=True)
+ print(f"INSECURE_KEY_CACHE_LOCATION={str(key_cache_location)}")
+
+ INSECURE_KEY_CACHE_LOCATION = str(key_cache_location)
+
+
+def pytest_sessionfinish(session, exitstatus): # pylint: disable=unused-argument
+ """
+ Save global coverage info after testing is finished.
+ """
+
+ # Hacked together from the source code, they don't have an option to export to file,
+ # and it's too much work to get a PR in for such a little thing.
+ # https://github.com/pytest-dev/pytest-cov/blob/ec344d8adf2d78238d8f07cb20ed2463d7536970/src/pytest_cov/plugin.py#L329
+ if session.config.pluginmanager.hasplugin("_cov"):
+ global_coverage_option = session.config.getoption("--global-coverage", default=None)
+ if global_coverage_option is not None:
+ coverage_plugin = session.config.pluginmanager.getplugin("_cov")
+ coverage_txt = coverage_plugin.cov_report.getvalue()
+
+ coverage_status = 0
+ if (
+ coverage_plugin.options.cov_fail_under is not None
+ and coverage_plugin.options.cov_fail_under > 0
+ and coverage_plugin.cov_total < coverage_plugin.options.cov_fail_under
+ ):
+ coverage_status = 1
+
+ global_coverage_file_path = Path(global_coverage_option).resolve()
+ with open(global_coverage_file_path, "w", encoding="utf-8") as f:
+ json.dump({"exit_code": coverage_status, "content": coverage_txt}, f)
+
+
+class Helpers:
+ """
+ Helpers class, which provides various helpers to tests.
+ """
+
+ @staticmethod
+ def configuration() -> cnp.Configuration:
+ """
+ Get the test configuration to use during testing.
+
+ Returns:
+ cnp.Configuration:
+ test configuration
+ """
+
+ return cnp.Configuration(
+ dump_artifacts_on_unexpected_failures=False,
+ enable_unsafe_features=True,
+ use_insecure_key_cache=True,
+ loop_parallelize=True,
+ dataflow_parallelize=False,
+ auto_parallelize=False,
+ jit=True,
+ insecure_key_cache_location=INSECURE_KEY_CACHE_LOCATION,
+ global_p_error=(1 / 10_000),
+ )
+
+ @staticmethod
+ def generate_encryption_statuses(parameters: Dict[str, Dict[str, Any]]) -> Dict[str, str]:
+ """
+        Generate parameter encryption statuses according to a parameter specification.
+
+ Args:
+ parameters (Dict[str, Dict[str, Any]]):
+ parameter specification to use
+
+ e.g.,
+
+ {
+ "param1": {"range": [0, 10], "status": "clear"},
+ "param2": {"range": [3, 30], "status": "encrypted", "shape": (3,)},
+ }
+
+ Returns:
+ Dict[str, str]:
+ parameter encryption statuses
+ generated according to the given parameter specification
+ """
+
+ return {
+ parameter: details["status"] if "status" in details else "encrypted"
+ for parameter, details in parameters.items()
+ }
+
+ @staticmethod
+ def generate_inputset(
+ parameters: Dict[str, Dict[str, Any]],
+ size: int = 128,
+ ) -> List[Union[Tuple[Union[int, np.ndarray], ...], Union[int, np.ndarray]]]:
+ """
+        Generate a random inputset of desired size according to a parameter specification.
+
+ Args:
+ parameters (Dict[str, Dict[str, Any]]):
+ parameter specification to use
+
+ e.g.,
+
+ {
+ "param1": {"range": [0, 10], "status": "clear"},
+ "param2": {"range": [3, 30], "status": "encrypted", "shape": (3,)},
+ }
+
+ size (int):
+ size of the resulting inputset
+
+ Returns:
+ List[Union[Tuple[Union[int, np.ndarray], ...], Union[int, np.ndarray]]]:
+ random inputset of desired size
+ generated according to the given parameter specification
+ """
+
+ inputset = []
+
+ for _ in range(size):
+ sample = Helpers.generate_sample(parameters)
+ inputset.append(tuple(sample) if len(sample) > 1 else sample[0])
+
+ return inputset
+
+ @staticmethod
+ def generate_sample(parameters: Dict[str, Dict[str, Any]]) -> List[Union[int, np.ndarray]]:
+ """
+        Generate a random sample according to a parameter specification.
+
+ Args:
+ parameters (Dict[str, Dict[str, Any]]):
+ parameter specification to use
+
+ e.g.,
+
+ {
+ "param1": {"range": [0, 10], "status": "clear"},
+ "param2": {"range": [3, 30], "status": "encrypted", "shape": (3,)},
+ }
+
+ Returns:
+ List[Union[int, np.ndarray]]:
+ random sample
+ generated according to the given parameter specification
+ """
+
+ sample = []
+
+ for description in parameters.values():
+ minimum, maximum = description.get("range", [0, (2**16) - 1])
+
+ if "shape" in description:
+ shape = description["shape"]
+ sample.append(np.random.randint(minimum, maximum + 1, size=shape, dtype=np.int64))
+ else:
+ sample.append(np.int64(random.randint(minimum, maximum)))
+
+ return sample
+
+ @staticmethod
+ def check_execution(
+ circuit: cnp.Circuit,
+ function: Callable,
+ sample: Union[Any, List[Any]],
+ retries: int = 1,
+ simulate: bool = False,
+ ):
+ """
+        Assert that `circuit` behaves the same as `function` on `sample`.
+
+ Args:
+ circuit (cnp.Circuit):
+ compiled circuit
+
+ function (Callable):
+ original function
+
+            sample (Union[Any, List[Any]]):
+ inputs
+
+ retries (int, default = 1):
+ number of times to retry (for probabilistic execution)
+
+            simulate (bool, default = False):
+                whether to simulate execution instead of running in FHE
+ """
+
+ if not isinstance(sample, list):
+ sample = [sample]
+
+ def sanitize(values):
+ if not isinstance(values, tuple):
+ values = (values,)
+
+ result = []
+ for value in values:
+ if isinstance(value, (bool, np.bool_)):
+ value = int(value)
+ elif isinstance(value, np.ndarray) and value.dtype == np.bool_:
+ value = value.astype(np.int64)
+
+ result.append(value)
+
+ return tuple(result)
+
+ for i in range(retries):
+ expected = sanitize(function(*sample))
+ actual = sanitize(
+ circuit.simulate(*sample) if simulate else circuit.encrypt_run_decrypt(*sample)
+ )
+
+ if all(np.array_equal(e, a) for e, a in zip(expected, actual)):
+ break
+
+ if i == retries - 1:
+ message = f"""
+
+Expected Output
+===============
+{expected}
+
+Actual Output
+=============
+{actual}
+
+ """
+ raise AssertionError(message)
+
+ @staticmethod
+ def check_str(expected: str, actual: str):
+ """
+        Assert that `actual` matches `expected`, ignoring error line information.
+
+ Args:
+ expected (str):
+ expected str
+
+ actual (str):
+ actual str
+ """
+
+ # remove error line information
+ # there are explicit tests to make sure the line information is correct
+ # however, it would have been very hard to keep the other tests up to date
+
+ actual = "\n".join(
+ line for line in actual.splitlines() if not line.strip().startswith(tests_directory)
+ )
+
+ assert (
+ actual.strip() == expected.strip()
+ ), f"""
+
+Expected Output
+===============
+{expected}
+
+Actual Output
+=============
+{actual}
+
+ """
+
+
+@pytest.fixture
+def helpers():
+ """
+ Fixture that provides `Helpers` class to tests.
+ """
+
+ return Helpers
diff --git a/frontends/concrete-python/tests/dtypes/__init__.py b/frontends/concrete-python/tests/dtypes/__init__.py
new file mode 100644
index 000000000..d348fda21
--- /dev/null
+++ b/frontends/concrete-python/tests/dtypes/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of `concrete.numpy.dtypes` namespace.
+"""
diff --git a/frontends/concrete-python/tests/dtypes/test_float.py b/frontends/concrete-python/tests/dtypes/test_float.py
new file mode 100644
index 000000000..7aa4234ac
--- /dev/null
+++ b/frontends/concrete-python/tests/dtypes/test_float.py
@@ -0,0 +1,92 @@
+"""
+Tests of `Float` data type.
+"""
+
+import pytest
+
+from concrete.numpy.dtypes import Float
+
+
+@pytest.mark.parametrize(
+ "bit_width,expected_error,expected_message",
+ [
+ pytest.param(
+ 128,
+ ValueError,
+ "Float(128) is not supported (bit width must be one of 16, 32 or 64)",
+ ),
+ pytest.param(
+ "abc",
+ ValueError,
+ "Float('abc') is not supported (bit width must be one of 16, 32 or 64)",
+ ),
+ ],
+)
+def test_float_bad_init(bit_width, expected_error, expected_message):
+ """
+ Test `__init__` method of `Float` data type with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ Float(bit_width)
+
+ assert str(excinfo.value) == expected_message
+
+
+@pytest.mark.parametrize(
+ "lhs,rhs,expected_result",
+ [
+ pytest.param(
+ Float(32),
+ Float(32),
+ True,
+ ),
+ pytest.param(
+ Float(32),
+ Float(64),
+ False,
+ ),
+ pytest.param(
+ Float(32),
+ "Float(32)",
+ False,
+ ),
+ pytest.param(
+ "Float(32)",
+ Float(32),
+ False,
+ ),
+ ],
+)
+def test_float_eq(lhs, rhs, expected_result):
+ """
+ Test `__eq__` method of `Float` data type.
+ """
+
+ assert (lhs == rhs) == expected_result
+ assert (rhs == lhs) == expected_result
+
+
+@pytest.mark.parametrize(
+ "data_type,expected_result",
+ [
+ pytest.param(
+ Float(16),
+ "float16",
+ ),
+ pytest.param(
+ Float(32),
+ "float32",
+ ),
+ pytest.param(
+ Float(64),
+ "float64",
+ ),
+ ],
+)
+def test_float_str(data_type, expected_result):
+ """
+ Test `__str__` method of `Float` data type.
+ """
+
+ assert str(data_type) == expected_result
diff --git a/frontends/concrete-python/tests/dtypes/test_integer.py b/frontends/concrete-python/tests/dtypes/test_integer.py
new file mode 100644
index 000000000..aa88d2596
--- /dev/null
+++ b/frontends/concrete-python/tests/dtypes/test_integer.py
@@ -0,0 +1,691 @@
+"""
+Tests of `Integer` data type.
+"""
+
+import numpy as np
+import pytest
+
+from concrete.numpy.dtypes import Integer, SignedInteger, UnsignedInteger
+
+
+@pytest.mark.parametrize(
+ "value,force_signed,expected_result",
+ [
+ pytest.param(
+ -4,
+ False,
+ SignedInteger(3),
+ ),
+ pytest.param(
+ -3,
+ False,
+ SignedInteger(3),
+ ),
+ pytest.param(
+ -2,
+ False,
+ SignedInteger(2),
+ ),
+ pytest.param(
+ -1,
+ False,
+ SignedInteger(1),
+ ),
+ pytest.param(
+ 0,
+ False,
+ UnsignedInteger(1),
+ ),
+ pytest.param(
+ 1,
+ False,
+ UnsignedInteger(1),
+ ),
+ pytest.param(
+ 2,
+ False,
+ UnsignedInteger(2),
+ ),
+ pytest.param(
+ 3,
+ False,
+ UnsignedInteger(2),
+ ),
+ pytest.param(
+ 4,
+ False,
+ UnsignedInteger(3),
+ ),
+ pytest.param(
+ -4,
+ True,
+ SignedInteger(3),
+ ),
+ pytest.param(
+ -3,
+ True,
+ SignedInteger(3),
+ ),
+ pytest.param(
+ -2,
+ True,
+ SignedInteger(2),
+ ),
+ pytest.param(
+ -1,
+ True,
+ SignedInteger(1),
+ ),
+ pytest.param(
+ 0,
+ True,
+ SignedInteger(1),
+ ),
+ pytest.param(
+ 1,
+ True,
+ SignedInteger(2),
+ ),
+ pytest.param(
+ 2,
+ True,
+ SignedInteger(3),
+ ),
+ pytest.param(
+ 3,
+ True,
+ SignedInteger(3),
+ ),
+ pytest.param(
+ 4,
+ True,
+ SignedInteger(4),
+ ),
+ pytest.param(
+ np.array([0, 1]),
+ False,
+ UnsignedInteger(1),
+ ),
+ pytest.param(
+ np.array([0, 1]),
+ True,
+ SignedInteger(2),
+ ),
+ pytest.param(
+ [-1, 1],
+ False,
+ SignedInteger(2),
+ ),
+ pytest.param(
+ [-1, 1],
+ True,
+ SignedInteger(2),
+ ),
+ ],
+)
+def test_integer_that_can_represent(value, force_signed, expected_result):
+ """
+ Test `that_can_represent` function of `Integer` data type.
+ """
+
+ assert Integer.that_can_represent(value, force_signed) == expected_result
+
+
+@pytest.mark.parametrize(
+ "value,force_signed,expected_error,expected_message",
+ [
+ pytest.param(
+ "abc",
+ False,
+ ValueError,
+ "Integer cannot represent 'abc'",
+ ),
+ pytest.param(
+ "abc",
+ True,
+ ValueError,
+ "Integer cannot represent 'abc'",
+ ),
+ pytest.param(
+ 4.2,
+ False,
+ ValueError,
+ "Integer cannot represent 4.2",
+ ),
+ pytest.param(
+ 4.2,
+ True,
+ ValueError,
+ "Integer cannot represent 4.2",
+ ),
+ pytest.param(
+ np.array([2.2, 1.1]),
+ False,
+ ValueError,
+ "Integer cannot represent array([2.2, 1.1])",
+ ),
+ pytest.param(
+ np.array([2.2, 1.1]),
+ True,
+ ValueError,
+ "Integer cannot represent array([2.2, 1.1])",
+ ),
+ pytest.param(
+ [1, (), 3],
+ True,
+ ValueError,
+ "Integer cannot represent [1, (), 3]",
+ ),
+ ],
+)
+def test_integer_bad_that_can_represent(value, force_signed, expected_error, expected_message):
+ """
+ Test `that_can_represent` function of `Integer` data type with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ Integer.that_can_represent(value, force_signed)
+
+ assert str(excinfo.value) == expected_message
+
+
+@pytest.mark.parametrize(
+ "constructor,bit_width,expected_error,expected_message",
+ [
+ pytest.param(
+ SignedInteger,
+ 0,
+ ValueError,
+ "SignedInteger(0) is not supported (bit width must be a positive integer)",
+ ),
+ pytest.param(
+ UnsignedInteger,
+ 0,
+ ValueError,
+ "UnsignedInteger(0) is not supported (bit width must be a positive integer)",
+ ),
+ pytest.param(
+ SignedInteger,
+ -1,
+ ValueError,
+ "SignedInteger(-1) is not supported (bit width must be a positive integer)",
+ ),
+ pytest.param(
+ UnsignedInteger,
+ -1,
+ ValueError,
+ "UnsignedInteger(-1) is not supported (bit width must be a positive integer)",
+ ),
+ pytest.param(
+ SignedInteger,
+ "abc",
+ ValueError,
+ "SignedInteger('abc') is not supported (bit width must be a positive integer)",
+ ),
+ pytest.param(
+ UnsignedInteger,
+ "abc",
+ ValueError,
+ "UnsignedInteger('abc') is not supported (bit width must be a positive integer)",
+ ),
+ ],
+)
+def test_integer_bad_init(constructor, bit_width, expected_error, expected_message):
+ """
+ Test `__init__` method of `Integer` data type with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ constructor(bit_width)
+
+ assert str(excinfo.value) == expected_message
+
+
+@pytest.mark.parametrize(
+ "lhs,rhs,expected_result",
+ [
+ pytest.param(
+ SignedInteger(5),
+ SignedInteger(5),
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(5),
+ UnsignedInteger(5),
+ True,
+ ),
+ pytest.param(
+ SignedInteger(5),
+ SignedInteger(6),
+ False,
+ ),
+ pytest.param(
+ SignedInteger(6),
+ SignedInteger(5),
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(5),
+ UnsignedInteger(6),
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(6),
+ UnsignedInteger(5),
+ False,
+ ),
+ pytest.param(
+ SignedInteger(5),
+ UnsignedInteger(5),
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(5),
+ SignedInteger(5),
+ False,
+ ),
+ pytest.param(
+ SignedInteger(5),
+ "SignedInteger(5)",
+ False,
+ ),
+ pytest.param(
+ "SignedInteger(5)",
+ SignedInteger(5),
+ False,
+ ),
+ ],
+)
+def test_integer_eq(lhs, rhs, expected_result):
+ """
+ Test `__eq__` method of `Integer` data type.
+ """
+
+ assert (lhs == rhs) == expected_result
+ assert (rhs == lhs) == expected_result
+
+
+@pytest.mark.parametrize(
+ "data_type,expected_result",
+ [
+ pytest.param(
+ UnsignedInteger(4),
+ "uint4",
+ ),
+ pytest.param(
+ UnsignedInteger(7),
+ "uint7",
+ ),
+ pytest.param(
+ SignedInteger(4),
+ "int4",
+ ),
+ pytest.param(
+ SignedInteger(7),
+ "int7",
+ ),
+ ],
+)
+def test_integer_str(data_type, expected_result):
+ """
+ Test `__str__` method of `Integer` data type.
+ """
+
+ assert str(data_type) == expected_result
+
+
+@pytest.mark.parametrize(
+ "data_type,expected_result",
+ [
+ pytest.param(
+ UnsignedInteger(1),
+ 0,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 0,
+ ),
+ pytest.param(
+ UnsignedInteger(5),
+ 0,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ -1,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ -4,
+ ),
+ pytest.param(
+ SignedInteger(5),
+ -16,
+ ),
+ ],
+)
+def test_integer_min(data_type, expected_result):
+ """
+ Test `min` method of `Integer` data type.
+ """
+
+ assert data_type.min() == expected_result
+
+
+@pytest.mark.parametrize(
+ "data_type,expected_result",
+ [
+ pytest.param(
+ UnsignedInteger(1),
+ 1,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 7,
+ ),
+ pytest.param(
+ UnsignedInteger(5),
+ 31,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ 0,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ 3,
+ ),
+ pytest.param(
+ SignedInteger(5),
+ 15,
+ ),
+ ],
+)
+def test_integer_max(data_type, expected_result):
+ """
+ Test `max` method of `Integer` data type.
+ """
+
+ assert data_type.max() == expected_result
+
+
+@pytest.mark.parametrize(
+ "data_type,value,expected_result",
+ [
+ pytest.param(
+ UnsignedInteger(1),
+ -4,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ -3,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ -2,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ -1,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ 0,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ 1,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ 2,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ 3,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(1),
+ 4,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ -4,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ -3,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ -2,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ -1,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ 0,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ 1,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ 2,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ 3,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(2),
+ 4,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ -4,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ -3,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ -2,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ -1,
+ False,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 0,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 1,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 2,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 3,
+ True,
+ ),
+ pytest.param(
+ UnsignedInteger(3),
+ 4,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ -4,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ -3,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ -2,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ -1,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ 0,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ 1,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ 2,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ 3,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(1),
+ 4,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ -4,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ -3,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ -2,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ -1,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ 0,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ 1,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ 2,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ 3,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(2),
+ 4,
+ False,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ -4,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ -3,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ -2,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ -1,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ 0,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ 1,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ 2,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ 3,
+ True,
+ ),
+ pytest.param(
+ SignedInteger(3),
+ 4,
+ False,
+ ),
+ ],
+)
+def test_integer_can_represent(data_type, value, expected_result):
+ """
+ Test `can_represent` method of `Integer` data type.
+ """
+
+ assert data_type.can_represent(value) == expected_result
diff --git a/frontends/concrete-python/tests/dtypes/test_utils.py b/frontends/concrete-python/tests/dtypes/test_utils.py
new file mode 100644
index 000000000..d427fef14
--- /dev/null
+++ b/frontends/concrete-python/tests/dtypes/test_utils.py
@@ -0,0 +1,77 @@
+"""
+Tests of utilities related to data types.
+"""
+
+import pytest
+
+from concrete.numpy.dtypes import Float, SignedInteger, UnsignedInteger
+from concrete.numpy.dtypes.utils import combine_dtypes
+
+
+@pytest.mark.parametrize(
+ "dtypes,expected_result",
+ [
+ pytest.param(
+ [Float(64), Float(64)],
+ Float(64),
+ ),
+ pytest.param(
+ [Float(32), Float(64)],
+ Float(64),
+ ),
+ pytest.param(
+ [Float(16), Float(64)],
+ Float(64),
+ ),
+ pytest.param(
+ [Float(32), Float(16)],
+ Float(32),
+ ),
+ pytest.param(
+ [Float(16), Float(16)],
+ Float(16),
+ ),
+ pytest.param(
+ [SignedInteger(5), Float(64)],
+ Float(64),
+ ),
+ pytest.param(
+ [Float(32), SignedInteger(5)],
+ Float(32),
+ ),
+ pytest.param(
+ [SignedInteger(5), Float(16)],
+ Float(16),
+ ),
+ pytest.param(
+ [SignedInteger(5), SignedInteger(6)],
+ SignedInteger(6),
+ ),
+ pytest.param(
+ [UnsignedInteger(5), UnsignedInteger(6)],
+ UnsignedInteger(6),
+ ),
+ pytest.param(
+ [SignedInteger(5), UnsignedInteger(6)],
+ SignedInteger(7),
+ ),
+ pytest.param(
+ [SignedInteger(5), UnsignedInteger(4)],
+ SignedInteger(5),
+ ),
+ pytest.param(
+ [UnsignedInteger(6), SignedInteger(5)],
+ SignedInteger(7),
+ ),
+ pytest.param(
+ [UnsignedInteger(4), SignedInteger(5)],
+ SignedInteger(5),
+ ),
+ ],
+)
+def test_combine_dtypes(dtypes, expected_result):
+ """
+ Test `combine_dtypes` function.
+ """
+
+ assert combine_dtypes(dtypes) == expected_result
diff --git a/frontends/concrete-python/tests/execution/__init__.py b/frontends/concrete-python/tests/execution/__init__.py
new file mode 100644
index 000000000..5d8e8e694
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of execution.
+"""
diff --git a/frontends/concrete-python/tests/execution/test_add.py b/frontends/concrete-python/tests/execution/test_add.py
new file mode 100644
index 000000000..ec07f4b8c
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_add.py
@@ -0,0 +1,181 @@
+"""
+Tests of execution of add operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x: x + 42,
+ id="x + 42",
+ ),
+ pytest.param(
+ lambda x: 42 + x,
+ id="42 + x",
+ ),
+ pytest.param(
+ lambda x: x + np.array([1, 2, 3]),
+ id="x + [1, 2, 3]",
+ ),
+ pytest.param(
+ lambda x: np.array([1, 2, 3]) + x,
+ id="[1, 2, 3] + x",
+ ),
+ pytest.param(
+ lambda x: x + np.array([[1, 2, 3], [4, 5, 6]]),
+ id="x + [[1, 2, 3], [4, 5, 6]]",
+ ),
+ pytest.param(
+ lambda x: np.array([[1, 2, 3], [4, 5, 6]]) + x,
+ id="[[1, 2, 3], [4, 5, 6]] + x",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 85], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 85], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 85], "status": "encrypted", "shape": (2, 3)},
+ },
+ {
+ "x": {"range": [-50, 10], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-50, 10], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [-50, 10], "status": "encrypted", "shape": (2, 3)},
+ },
+ {
+ "x": {"range": [0, 1000000], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 1000000], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 1000000], "status": "encrypted", "shape": (2, 3)},
+ },
+ ],
+)
+def test_constant_add(function, parameters, helpers):
+ """
+    Test add where one of the operands is a constant.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x + y,
+ id="x + y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 60], "status": "clear"},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "clear"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "clear"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear"},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear", "shape": (2, 1)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (2, 1)},
+ "y": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (2, 1)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [-30, 30], "status": "encrypted", "shape": (3, 2)},
+ "y": {"range": [-30, 30], "status": "encrypted", "shape": (3, 2)},
+ },
+ ],
+)
+def test_add(function, parameters, helpers):
+ """
+    Test add where both operands are dynamic.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_array.py b/frontends/concrete-python/tests/execution/test_array.py
new file mode 100644
index 000000000..a75cab839
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_array.py
@@ -0,0 +1,61 @@
+"""
+Tests of execution of array operation.
+"""
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function,parameters",
+ [
+ pytest.param(
+ lambda x: cnp.array([x, x + 1, 1]),
+ {
+ "x": {"range": [0, 10], "status": "encrypted", "shape": ()},
+ },
+ id="cnp.array([x, x + 1, 1])",
+ ),
+ pytest.param(
+ lambda x, y: cnp.array([x, y]),
+ {
+ "x": {"range": [0, 10], "status": "encrypted", "shape": ()},
+ "y": {"range": [0, 10], "status": "clear", "shape": ()},
+ },
+ id="cnp.array([x, y])",
+ ),
+ pytest.param(
+ lambda x, y: cnp.array([[x, y], [y, x]]),
+ {
+ "x": {"range": [0, 10], "status": "encrypted", "shape": ()},
+ "y": {"range": [0, 10], "status": "clear", "shape": ()},
+ },
+ id="cnp.array([[x, y], [y, x]])",
+ ),
+ pytest.param(
+ lambda x, y, z: cnp.array([[x, 1], [y, 2], [z, 3]]),
+ {
+ "x": {"range": [0, 10], "status": "encrypted", "shape": ()},
+ "y": {"range": [0, 10], "status": "clear", "shape": ()},
+ "z": {"range": [0, 10], "status": "encrypted", "shape": ()},
+ },
+ id="cnp.array([[x, 1], [y, 2], [z, 3]])",
+ ),
+ ],
+)
+def test_array(function, parameters, helpers):
+ """
+ Test array.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_bitwise.py b/frontends/concrete-python/tests/execution/test_bitwise.py
new file mode 100644
index 000000000..ad13984aa
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_bitwise.py
@@ -0,0 +1,62 @@
+"""
+Tests of execution of bitwise operations.
+"""
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x & y,
+ id="x & y",
+ ),
+ pytest.param(
+ lambda x, y: x | y,
+ id="x | y",
+ ),
+ pytest.param(
+ lambda x, y: x ^ y,
+ id="x ^ y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 255], "status": "encrypted"},
+ "y": {"range": [0, 255], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 7], "status": "encrypted"},
+ "y": {"range": [0, 7], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 7], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 7], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 7], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 7], "status": "encrypted", "shape": (3,)},
+ },
+ ],
+)
+def test_bitwise(function, parameters, helpers):
+ """
+ Test bitwise operations between encrypted integers.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_broadcast_to.py b/frontends/concrete-python/tests/execution/test_broadcast_to.py
new file mode 100644
index 000000000..175d0a6f4
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_broadcast_to.py
@@ -0,0 +1,40 @@
+"""
+Tests of execution of broadcast to operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "from_shape,to_shape",
+ [
+ pytest.param((), (2,)),
+ pytest.param((), (2, 3)),
+ pytest.param((3,), (2, 3)),
+ pytest.param((3,), (4, 2, 3)),
+ pytest.param((1, 2), (4, 3, 2)),
+ pytest.param((3, 2), (4, 3, 2)),
+ pytest.param((3, 1), (4, 3, 5)),
+ pytest.param((3, 1, 4), (3, 2, 4)),
+ pytest.param((3, 1, 1), (5, 3, 1, 3)),
+ ],
+)
+def test_broadcast_to(from_shape, to_shape, helpers):
+ """
+ Test broadcast to.
+ """
+
+ def function(x):
+ return np.broadcast_to(x, to_shape)
+
+ configuration = helpers.configuration()
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**2, size=from_shape) for _ in range(100)]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**2, size=from_shape)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_comparison.py b/frontends/concrete-python/tests/execution/test_comparison.py
new file mode 100644
index 000000000..34e892357
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_comparison.py
@@ -0,0 +1,161 @@
+"""
+Tests of execution of comparison operations.
+"""
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x == y,
+ id="x == y",
+ ),
+ pytest.param(
+ lambda x, y: x != y,
+ id="x != y",
+ ),
+ pytest.param(
+ lambda x, y: x < y,
+ id="x < y",
+ ),
+ pytest.param(
+ lambda x, y: x <= y,
+ id="x <= y",
+ ),
+ pytest.param(
+ lambda x, y: x > y,
+ id="x > y",
+ ),
+ pytest.param(
+ lambda x, y: x >= y,
+ id="x >= y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 3], "status": "encrypted"},
+ "y": {"range": [0, 3], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 255], "status": "encrypted"},
+ "y": {"range": [0, 255], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-128, 127], "status": "encrypted"},
+ "y": {"range": [-128, 127], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-128, 127], "status": "encrypted"},
+ "y": {"range": [0, 255], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 255], "status": "encrypted"},
+ "y": {"range": [-128, 127], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-8, 7], "status": "encrypted"},
+ "y": {"range": [-8, 7], "status": "encrypted", "shape": (2,)},
+ },
+ {
+ "x": {"range": [-8, 7], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [-8, 7], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-8, 7], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [-8, 7], "status": "encrypted", "shape": (2,)},
+ },
+ ],
+)
+def test_comparison(function, parameters, helpers):
+ """
+ Test comparison operations between encrypted integers.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: (x == y) + 200,
+ id="(x == y) + 200",
+ ),
+ pytest.param(
+ lambda x, y: (x != y) + 200,
+ id="(x != y) + 200",
+ ),
+ pytest.param(
+ lambda x, y: (x < y) + 200,
+ id="(x < y) + 200",
+ ),
+ pytest.param(
+ lambda x, y: (x <= y) + 200,
+ id="(x <= y) + 200",
+ ),
+ pytest.param(
+ lambda x, y: (x > y) + 200,
+ id="(x > y) + 200",
+ ),
+ pytest.param(
+ lambda x, y: (x >= y) + 200,
+ id="(x >= y) + 200",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 15], "status": "encrypted"},
+ "y": {"range": [0, 15], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-8, 7], "status": "encrypted"},
+ "y": {"range": [-8, 7], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 15], "status": "encrypted"},
+ "y": {"range": [0, 15], "status": "encrypted", "shape": (2,)},
+ },
+ {
+ "x": {"range": [-8, 7], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [-8, 7], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-10, 10], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [-10, 10], "status": "encrypted", "shape": (2,)},
+ },
+ ],
+)
+def test_optimized_comparison(function, parameters, helpers):
+ """
+ Test comparison operations between encrypted integers with a single TLU.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_concat.py b/frontends/concrete-python/tests/execution/test_concat.py
new file mode 100644
index 000000000..0eab4ce45
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_concat.py
@@ -0,0 +1,176 @@
+"""
+Tests of execution of the concatenate operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function,parameters",
+ [
+ pytest.param(
+ lambda x, y: np.concatenate((x, y)),
+ {
+ "x": {"shape": (4, 2)},
+ "y": {"shape": (3, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=0),
+ {
+ "x": {"shape": (4, 2)},
+ "y": {"shape": (3, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=1),
+ {
+ "x": {"shape": (2, 4)},
+ "y": {"shape": (2, 3)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=-1),
+ {
+ "x": {"shape": (2, 4)},
+ "y": {"shape": (2, 3)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=-2),
+ {
+ "x": {"shape": (4, 2)},
+ "y": {"shape": (3, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=None),
+ {
+ "x": {"shape": (3, 4)},
+ "y": {"shape": (2, 3)},
+ },
+ ),
+ pytest.param(
+ lambda x, y, z: np.concatenate((x, y, z)),
+ {
+ "x": {"shape": (4, 2)},
+ "y": {"shape": (3, 2)},
+ "z": {"shape": (5, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y, z: np.concatenate((x, y, z), axis=0),
+ {
+ "x": {"shape": (4, 2)},
+ "y": {"shape": (3, 2)},
+ "z": {"shape": (5, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y, z: np.concatenate((x, y, z), axis=1),
+ {
+ "x": {"shape": (2, 4)},
+ "y": {"shape": (2, 3)},
+ "z": {"shape": (2, 5)},
+ },
+ ),
+ pytest.param(
+ lambda x, y, z: np.concatenate((x, y, z), axis=-1),
+ {
+ "x": {"shape": (2, 4)},
+ "y": {"shape": (2, 3)},
+ "z": {"shape": (2, 5)},
+ },
+ ),
+ pytest.param(
+ lambda x, y, z: np.concatenate((x, y, z), axis=-2),
+ {
+ "x": {"shape": (4, 2)},
+ "y": {"shape": (3, 2)},
+ "z": {"shape": (5, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y, z: np.concatenate((x, y, z), axis=None),
+ {
+ "x": {"shape": (3, 4)},
+ "y": {"shape": (2, 3)},
+ "z": {"shape": (5, 1)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y)),
+ {
+ "x": {"shape": (3, 4, 2)},
+ "y": {"shape": (5, 4, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=0),
+ {
+ "x": {"shape": (3, 4, 2)},
+ "y": {"shape": (5, 4, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=1),
+ {
+ "x": {"shape": (2, 4, 5)},
+ "y": {"shape": (2, 3, 5)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=2),
+ {
+ "x": {"shape": (2, 3, 4)},
+ "y": {"shape": (2, 3, 5)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=-1),
+ {
+ "x": {"shape": (2, 3, 4)},
+ "y": {"shape": (2, 3, 5)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=-2),
+ {
+ "x": {"shape": (2, 4, 5)},
+ "y": {"shape": (2, 3, 5)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=-3),
+ {
+ "x": {"shape": (3, 4, 2)},
+ "y": {"shape": (5, 4, 2)},
+ },
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y), axis=None),
+ {
+ "x": {"shape": (3, 4, 5)},
+ "y": {"shape": (5, 2, 3)},
+ },
+ ),
+ ],
+)
+def test_concatenate(function, parameters, helpers):
+ """
+ Test concatenate.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_convolution.py b/frontends/concrete-python/tests/execution/test_convolution.py
new file mode 100644
index 000000000..d7494f6ae
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_convolution.py
@@ -0,0 +1,504 @@
+"""
+Tests of execution of the convolution operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+import concrete.onnx as connx
+from concrete.numpy.representation.node import Node
+from concrete.numpy.tracing.tracer import Tracer
+
+
+@pytest.mark.parametrize(
+    "input_shape,weight_shape,group",
+ [
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ 1,
+ ),
+ pytest.param(
+ (4, 3, 4, 4),
+ (2, 3, 2, 2),
+ 1,
+ ),
+ pytest.param(
+ (1, 6, 4, 4),
+ (6, 1, 2, 2),
+ 6,
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "strides",
+ [
+ (2, 2),
+ ],
+)
+@pytest.mark.parametrize(
+ "dilations",
+ [
+ (1, 1),
+ ],
+)
+@pytest.mark.parametrize(
+ "has_bias",
+ [
+ True,
+ False,
+ ],
+)
+def test_conv2d(input_shape, weight_shape, group, strides, dilations, has_bias, helpers):
+ """
+ Test conv2d.
+ """
+
+ configuration = helpers.configuration()
+
+ weight = np.random.randint(0, 4, size=weight_shape)
+
+ if has_bias:
+ bias = np.random.randint(0, 4, size=(weight_shape[0],))
+ else:
+ bias = None
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return connx.conv(x, weight, bias, strides=strides, dilations=dilations, group=group)
+
+    inputset = [np.random.randint(0, 4, size=input_shape) for _ in range(100)]
+ circuit = function.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 4, size=input_shape)
+ helpers.check_execution(circuit, function, sample)
+
+
+@pytest.mark.parametrize(
+ "input_shape,weight_shape,bias_shape,pads,strides,dilations,kernel_shape,group,auto_pad,"
+ "expected_error,expected_message",
+ [
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "VALID",
+ ValueError,
+ "auto_pad should be in {'NOTSET'}, but got 'VALID'",
+ ),
+ pytest.param(
+ (1, 1, 1, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (1, 0, 2, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ RuntimeError,
+ "padding should be the same for the beginning of the dimension and its end, but got "
+ "1 in the beginning, and 2 at the end for dimension 0",
+ ),
+ pytest.param(
+ (1, 1, 4),
+ (1, 1, 2),
+ (1,),
+ (),
+ (1,),
+ (1,),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "pads should be of form (D_begin_pad, D_end_pad) when performing 1D convolution, "
+ "but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4),
+ (1, 1, 2),
+ (1,),
+ (0, 0),
+ (),
+ (1,),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+            "strides should be of form (D_stride,) when performing 1D convolution, but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4),
+ (1, 1, 2),
+ (1,),
+ (0, 0),
+ (1,),
+ (),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "dilations should be of form (D_dilation,) when performing 1D "
+ "convolution, but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "pads should be of form (height_begin_pad, width_begin_pad, "
+ "height_end_pad, width_end_pad) when performing 2D convolution, "
+ "but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "strides should be of form (height_stride, width_stride) when performing 2D "
+ "convolution, but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "dilations should be of form (height_dilation, width_dilation) when performing 2D "
+ "convolution, but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4, 4, 4),
+ (1, 1, 2, 2, 4),
+ (1,),
+ (),
+ (1, 1, 1),
+ (1, 1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "pads should be of form (D_begin_pad, height_begin_pad, width_begin_pad, "
+ "D_end_pad, height_end_pad, width_end_pad) when performing 3D convolution, "
+ "but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4, 4, 4),
+ (1, 1, 2, 2, 2),
+ (1,),
+ (0, 0, 0, 0, 0, 0),
+ (),
+ (1, 1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "strides should be of form (D_stride, height_stride, width_stride) when performing 3D "
+ "convolution, but it's ()",
+ ),
+ pytest.param(
+ (1, 1, 4, 4, 4),
+ (1, 1, 2, 2, 2),
+ (1,),
+ (0, 0, 0, 0, 0, 0),
+ (1, 1, 1),
+ (),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "dilations should be of form (D_dilation, height_dilation, width_dilation) when "
+ "performing 3D convolution, but it's ()",
+ ),
+ pytest.param(
+ (),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "expected input x to have at least 3 dimensions (N, C, D1, ...), but got 0",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "expected weight to have at least 3 dimensions (F, C / group, K1, ...), but got 0",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "expected bias to have a single dimension (F,), but got 0",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ (1, 2),
+ 1,
+ "NOTSET",
+ ValueError,
+ "expected kernel_shape to be (2, 2), but got (1, 2)",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ None,
+ "NOTSET",
+ ValueError,
+ "expected group to be an integer > 0, but got None",
+ ),
+ pytest.param(
+ (1, 1, 4, 4),
+ (1, 2, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 1,
+ "NOTSET",
+ ValueError,
+ "expected number of channel in weight to be 1.0 (C / group), but got 2",
+ ),
+ pytest.param(
+ (1, 1, 4),
+ (1, 1, 2),
+ (1,),
+ (0, 0),
+ (1,),
+ (1,),
+ None,
+ 1,
+ "NOTSET",
+ NotImplementedError,
+ "conv1d conversion to MLIR is not yet implemented",
+ ),
+ pytest.param(
+ (1, 1, 4, 4, 4),
+ (1, 1, 2, 2, 2),
+ (1,),
+ (0, 0, 0, 0, 0, 0),
+ (1, 1, 1),
+ (1, 1, 1),
+ None,
+ 1,
+ "NOTSET",
+ NotImplementedError,
+ "conv3d conversion to MLIR is not yet implemented",
+ ),
+ pytest.param(
+ (1, 1, 4, 4, 4, 4),
+ (1, 1, 2, 2, 2, 2),
+ (1,),
+ (0, 0, 0, 0, 0, 0, 0, 0),
+ (1, 1, 1, 1),
+ (1, 1, 1, 1),
+ None,
+ 1,
+ "NOTSET",
+ NotImplementedError,
+ "only 1D, 2D, and 3D convolutions are supported",
+ ),
+ pytest.param(
+ (1, 2, 4, 4),
+ (1, 1, 2, 2),
+ (1,),
+ (0, 0, 0, 0),
+ (1, 1),
+ (1, 1),
+ None,
+ 2,
+ "NOTSET",
+ ValueError,
+ "expected number of feature maps (1) to be a multiple of group (2)",
+ ),
+ ],
+)
+# pylint: disable=too-many-arguments
+def test_bad_conv_compilation(
+ input_shape,
+ weight_shape,
+ bias_shape,
+ pads,
+ strides,
+ dilations,
+ kernel_shape,
+ group,
+ auto_pad,
+ expected_error,
+ expected_message,
+ helpers,
+):
+ # pylint: enable=too-many-arguments
+ """
+ Test conv with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ weight = np.random.randint(0, 4, size=weight_shape)
+
+ if bias_shape is not None:
+ bias = np.random.randint(0, 4, size=bias_shape)
+ else:
+ bias = None
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return connx.conv(
+ x,
+ weight,
+ bias=bias,
+ pads=pads,
+ strides=strides,
+ dilations=dilations,
+ kernel_shape=kernel_shape,
+ group=group,
+ auto_pad=auto_pad,
+ )
+
+    inputset = [np.random.randint(0, 4, size=input_shape) for _ in range(100)]
+ with pytest.raises(expected_error) as excinfo:
+ function.compile(inputset, configuration)
+
+ # Get the root cause error
+ current_error = excinfo.value
+ cause = current_error.__cause__
+ while cause:
+ current_error = cause
+ cause = current_error.__cause__
+
+ assert str(current_error) == expected_message
+
+
+@pytest.mark.parametrize(
+ "conv_func_name",
+ [
+ "conv",
+ 223,
+ None,
+ ],
+)
+@pytest.mark.parametrize(
+ "func",
+ [
+ # pylint: disable=protected-access
+ connx.convolution._evaluate_conv,
+ connx.convolution._trace_conv,
+ # pylint: enable=protected-access
+ ],
+)
+def test_bad_conv_func_name(conv_func_name, func):
+ """
+ Test invalid conv function name.
+ """
+ with pytest.raises(AssertionError) as excinfo:
+ func(None, None, None, None, None, None, None, conv_func_name)
+ assert (
+        str(excinfo.value) == "expected conv_func to be one of ['conv1d', 'conv2d', 'conv3d'], "
+ f"but got {conv_func_name}"
+ )
+
+
+@pytest.mark.parametrize(
+ "x,weight,bias,expected_error,expected_message",
+ [
+ pytest.param(
+ np.array([1, 2, 3]),
+ "not same type as x",
+ None,
+ TypeError,
+ "expected weight to be of same type as x",
+ ),
+ pytest.param(
+ np.array([1, 2, 3]),
+ np.array([1, 2, 3]),
+ "not same type as x",
+ TypeError,
+ "expected bias to be of same type as x",
+ ),
+ pytest.param(
+ Tracer(Node.constant(np.array([1, 2, 3])), []),
+ "not same type as x",
+ None,
+ TypeError,
+ "expected weight to be of type Tracer or ndarray",
+ ),
+ pytest.param(
+ Tracer(Node.constant(np.array([1, 2, 3])), []),
+ np.array([1, 2, 3]),
+ "not same type as x",
+ TypeError,
+ "expected bias to be of type Tracer or ndarray",
+ ),
+ ],
+)
+def test_inconsistent_input_types(
+ x,
+ weight,
+ bias,
+ expected_error,
+ expected_message,
+):
+ """
+ Test conv with inconsistent input types.
+ """
+ with pytest.raises(expected_error) as excinfo:
+ connx.conv(
+ x,
+ weight,
+ bias=bias,
+ )
+
+ assert str(excinfo.value) == expected_message
diff --git a/frontends/concrete-python/tests/execution/test_direct_table_lookup.py b/frontends/concrete-python/tests/execution/test_direct_table_lookup.py
new file mode 100644
index 000000000..8d1bb7a1d
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_direct_table_lookup.py
@@ -0,0 +1,323 @@
+"""
+Tests of execution of the direct table lookup operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+def identity_table_lookup_generator(n):
+ """
+ Get identity table lookup function.
+ """
+
+ return lambda x: cnp.LookupTable(range(2**n))[x]
+
+
+def random_table_lookup_1b(x):
+ """
+ Lookup on a random table with 1-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable([10, 12])
+ # fmt: on
+
+ return table[x]
+
+
+def random_table_lookup_2b(x):
+ """
+ Lookup on a random table with 2-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable([3, 8, 22, 127])
+ # fmt: on
+
+ return table[x]
+
+
+def random_table_lookup_3b(x):
+ """
+ Lookup on a random table with 3-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable([30, 52, 125, 23, 17, 12, 90, 4])
+ # fmt: on
+
+ return table[x]
+
+
+def random_table_lookup_4b(x):
+ """
+ Lookup on a random table with 4-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable([30, 52, 125, 23, 17, 12, 90, 4, 21, 51, 22, 15, 53, 100, 75, 90])
+ # fmt: on
+
+ return table[x]
+
+
+def random_table_lookup_5b(x):
+ """
+ Lookup on a random table with 5-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable(
+ [
+ 1, 5, 2, 3, 10, 2, 4, 8, 1, 12, 15, 12, 10, 1, 0, 2,
+ 4, 3, 8, 7, 10, 11, 6, 13, 9, 0, 2, 1, 15, 11, 12, 5
+ ]
+ )
+ # fmt: on
+
+ return table[x]
+
+
+def random_table_lookup_6b(x):
+ """
+ Lookup on a random table with 6-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable(
+ [
+ 95, 74, 11, 83, 24, 116, 28, 75, 26, 85, 114, 121, 91, 123, 78, 69,
+ 72, 115, 67, 5, 39, 11, 120, 88, 56, 43, 74, 16, 72, 85, 103, 92,
+ 44, 115, 50, 56, 107, 77, 25, 71, 52, 45, 80, 35, 69, 8, 40, 87,
+ 26, 85, 84, 53, 73, 95, 86, 22, 16, 45, 59, 112, 53, 113, 98, 116
+ ]
+ )
+ # fmt: on
+
+ return table[x]
+
+
+def random_table_lookup_7b(x):
+ """
+ Lookup on a random table with 7-bit input.
+ """
+
+ # fmt: off
+ table = cnp.LookupTable(
+ [
+ 13, 58, 38, 58, 15, 15, 77, 86, 80, 94, 108, 27, 126, 60, 65, 95,
+ 50, 79, 22, 97, 38, 60, 25, 48, 73, 112, 27, 45, 88, 20, 67, 17,
+ 16, 6, 71, 60, 77, 43, 93, 40, 41, 31, 99, 122, 120, 40, 94, 13,
+ 111, 44, 96, 62, 108, 91, 34, 90, 103, 58, 3, 103, 19, 69, 55, 108,
+ 0, 111, 113, 0, 0, 73, 22, 52, 81, 2, 88, 76, 36, 121, 97, 121,
+ 123, 79, 82, 120, 12, 65, 54, 101, 90, 52, 84, 106, 23, 15, 110, 79,
+ 85, 101, 30, 61, 104, 35, 81, 30, 98, 44, 111, 32, 68, 18, 45, 123,
+ 84, 80, 68, 27, 31, 38, 126, 61, 51, 7, 49, 37, 63, 114, 22, 18,
+ ]
+ )
+ # fmt: on
+
+ return table[x]
+
+
+def negative_identity_table_lookup_generator(n):
+ """
+ Get negative identity table lookup function.
+ """
+
+ return lambda x: cnp.LookupTable([-i for i in range(2**n)])[x]
+
+
+@pytest.mark.parametrize(
+ "bits,function",
+ [
+ pytest.param(1, identity_table_lookup_generator(1)),
+ pytest.param(2, identity_table_lookup_generator(2)),
+ pytest.param(3, identity_table_lookup_generator(3)),
+ pytest.param(4, identity_table_lookup_generator(4)),
+ pytest.param(5, identity_table_lookup_generator(5)),
+ pytest.param(6, identity_table_lookup_generator(6)),
+ pytest.param(7, identity_table_lookup_generator(7)),
+ pytest.param(1, random_table_lookup_1b),
+ pytest.param(2, random_table_lookup_2b),
+ pytest.param(3, random_table_lookup_3b),
+ pytest.param(4, random_table_lookup_4b),
+ pytest.param(5, random_table_lookup_5b),
+ pytest.param(6, random_table_lookup_6b),
+ pytest.param(7, random_table_lookup_7b),
+ pytest.param(1, negative_identity_table_lookup_generator(1)),
+ pytest.param(2, negative_identity_table_lookup_generator(2)),
+ pytest.param(3, negative_identity_table_lookup_generator(3)),
+ pytest.param(4, negative_identity_table_lookup_generator(4)),
+ pytest.param(5, negative_identity_table_lookup_generator(5)),
+ pytest.param(6, negative_identity_table_lookup_generator(6)),
+ ],
+)
+def test_direct_table_lookup(bits, function, helpers):
+ """
+ Test direct table lookup.
+ """
+
+ configuration = helpers.configuration()
+
+ # scalar
+ # ------
+
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = range(2**bits)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = int(np.random.randint(0, 2**bits))
+ helpers.check_execution(circuit, function, sample)
+
+ # tensor
+ # ------
+
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**bits, size=(3, 2)) for _ in range(100)]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**bits, size=(3, 2))
+ helpers.check_execution(circuit, function, sample)
+
+ # negative scalar
+ # ---------------
+
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = range(-(2 ** (bits - 1)), 2 ** (bits - 1))
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = int(np.random.randint(-(2 ** (bits - 1)), 2 ** (bits - 1)))
+ helpers.check_execution(circuit, function, sample)
+
+ # negative tensor
+ # ---------------
+
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [
+ np.random.randint(-(2 ** (bits - 1)), 2 ** (bits - 1), size=(3, 2)) for _ in range(100)
+ ]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(-(2 ** (bits - 1)), 2 ** (bits - 1), size=(3, 2))
+ helpers.check_execution(circuit, function, sample)
+
+
+def test_direct_multi_table_lookup(helpers):
+ """
+ Test direct multi table lookup.
+ """
+
+ configuration = helpers.configuration()
+
+ square = cnp.LookupTable([i * i for i in range(4)])
+ cube = cnp.LookupTable([i * i * i for i in range(4)])
+
+ table = cnp.LookupTable(
+ [
+ [square, cube],
+ [cube, square],
+ [square, cube],
+ ]
+ )
+
+ def function(x):
+ return table[x]
+
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**2, size=(3, 2)) for _ in range(100)]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**2, size=(3, 2))
+ helpers.check_execution(circuit, function, sample)
+
+
+def test_bad_direct_table_lookup(helpers):
+ """
+ Test direct table lookup with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ # empty table
+ # -----------
+
+ with pytest.raises(ValueError) as excinfo:
+ cnp.LookupTable([])
+
+ assert str(excinfo.value) == "LookupTable cannot be constructed with []"
+
+ # invalid table
+ # -------------
+
+ with pytest.raises(ValueError) as excinfo:
+ cnp.LookupTable([[0, 1], [2, 3]])
+
+ assert str(excinfo.value) == "LookupTable cannot be constructed with [[0, 1], [2, 3]]"
+
+ # invalid multi table
+ # -------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ cnp.LookupTable(["abc", 3.2])
+
+ assert str(excinfo.value) == "LookupTable cannot be constructed with ['abc', 3.2]"
+
+ # simulation with float value
+ # ---------------------------
+
+ with pytest.raises(ValueError) as excinfo:
+ random_table_lookup_3b(1.1)
+
+ assert str(excinfo.value) == "LookupTable cannot be looked up with 1.1"
+
+ # simulation with invalid shape
+ # -----------------------------
+
+ square = cnp.LookupTable([i * i for i in range(4)])
+ cube = cnp.LookupTable([i * i * i for i in range(4)])
+
+ table = cnp.LookupTable(
+ [
+ [square, cube],
+ [cube, square],
+ [square, cube],
+ ]
+ )
+
+ with pytest.raises(ValueError) as excinfo:
+ _ = table[np.array([1, 2])]
+
+ assert str(excinfo.value) == "LookupTable of shape (3, 2) cannot be looked up with [1 2]"
+
+ # compilation with float value
+ # ----------------------------
+
+ compiler = cnp.Compiler(random_table_lookup_3b, {"x": "encrypted"})
+
+ inputset = [1.5]
+ with pytest.raises(ValueError) as excinfo:
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == "LookupTable cannot be looked up with EncryptedScalar"
+
+ # compilation with invalid shape
+ # ------------------------------
+
+ compiler = cnp.Compiler(lambda x: table[x], {"x": "encrypted"})
+
+ inputset = [10, 5, 6, 2]
+ with pytest.raises(ValueError) as excinfo:
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == (
+ "LookupTable of shape (3, 2) cannot be looked up with EncryptedScalar"
+ )
diff --git a/frontends/concrete-python/tests/execution/test_dot.py b/frontends/concrete-python/tests/execution/test_dot.py
new file mode 100644
index 000000000..daae5c13b
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_dot.py
@@ -0,0 +1,47 @@
+"""
+Tests of execution of the dot operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "size",
+ [1, 4, 6, 10],
+)
+def test_dot(size, helpers):
+ """
+ Test dot.
+ """
+
+ configuration = helpers.configuration()
+
+ bound = int(np.floor(np.sqrt(127 / size)))
+ cst = np.random.randint(0, bound, size=(size,))
+
+ @cnp.compiler({"x": "encrypted"})
+ def left_function(x):
+ return np.dot(x, cst)
+
+ @cnp.compiler({"x": "encrypted"})
+ def right_function(x):
+ return np.dot(cst, x)
+
+ @cnp.compiler({"x": "encrypted"})
+ def method(x):
+ return x.dot(cst)
+
+    inputset = [np.random.randint(0, bound, size=(size,)) for _ in range(100)]
+
+ left_function_circuit = left_function.compile(inputset, configuration)
+ right_function_circuit = right_function.compile(inputset, configuration)
+ method_circuit = method.compile(inputset, configuration)
+
+ sample = np.random.randint(0, bound, size=(size,))
+
+ helpers.check_execution(left_function_circuit, left_function, sample)
+ helpers.check_execution(right_function_circuit, right_function, sample)
+ helpers.check_execution(method_circuit, method, sample)
diff --git a/frontends/concrete-python/tests/execution/test_iter.py b/frontends/concrete-python/tests/execution/test_iter.py
new file mode 100644
index 000000000..a3345a103
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_iter.py
@@ -0,0 +1,30 @@
+"""
+Tests of execution of iteration over tracers.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize("shape", [(3,), (3, 2), (3, 2, 4)])
+def test_iter(shape, helpers):
+ """
+ Test iteration of tracers.
+ """
+
+ def function(x):
+ result = cnp.zeros(x.shape[1:])
+ for value in x:
+ result += value
+ return result
+
+ configuration = helpers.configuration()
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**2, size=shape) for _ in range(100)]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**2, size=shape)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_matmul.py b/frontends/concrete-python/tests/execution/test_matmul.py
new file mode 100644
index 000000000..dca275147
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_matmul.py
@@ -0,0 +1,158 @@
+"""
+Tests of execution of the matmul operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "lhs_shape,rhs_shape,bounds",
+ [
+ pytest.param(
+ (3, 2),
+ (2, 3),
+ (0, 3),
+ ),
+ pytest.param(
+ (1, 2),
+ (2, 1),
+ (0, 3),
+ ),
+ pytest.param(
+ (3, 3),
+ (3, 3),
+ (0, 3),
+ ),
+ pytest.param(
+ (2, 1),
+ (1, 2),
+ (0, 7),
+ ),
+ pytest.param(
+ (2,),
+ (2,),
+ (0, 7),
+ ),
+ pytest.param(
+ (5, 5),
+ (5,),
+ (0, 3),
+ ),
+ pytest.param(
+ (5,),
+ (5, 5),
+ (0, 3),
+ ),
+ pytest.param(
+ (5,),
+ (5, 3),
+ (0, 3),
+ ),
+ pytest.param(
+ (5, 3),
+ (3,),
+ (0, 3),
+ ),
+ pytest.param(
+ (5,),
+ (4, 5, 3),
+ (0, 5),
+ ),
+ pytest.param(
+ (4, 5, 3),
+ (3,),
+ (0, 5),
+ ),
+ pytest.param(
+ (5,),
+ (2, 4, 5, 3),
+ (0, 5),
+ ),
+ pytest.param(
+ (2, 4, 5, 3),
+ (3,),
+ (0, 5),
+ ),
+ pytest.param(
+ (5, 4, 3),
+ (3, 2),
+ (0, 5),
+ ),
+ pytest.param(
+ (4, 3),
+ (5, 3, 2),
+ (0, 5),
+ ),
+ pytest.param(
+ (2, 5, 4, 3),
+ (3, 2),
+ (0, 5),
+ ),
+ pytest.param(
+ (5, 4, 3),
+ (1, 3, 2),
+ (0, 5),
+ ),
+ pytest.param(
+ (1, 4, 3),
+ (5, 3, 2),
+ (0, 5),
+ ),
+ pytest.param(
+ (5, 4, 3),
+ (2, 1, 3, 2),
+ (0, 5),
+ ),
+ pytest.param(
+ (2, 1, 4, 3),
+ (5, 3, 2),
+ (0, 5),
+ ),
+ ],
+)
+def test_matmul(lhs_shape, rhs_shape, bounds, helpers):
+ """
+ Test matmul.
+ """
+
+ configuration = helpers.configuration()
+
+ minimum, maximum = bounds
+
+ lhs_cst = list(np.random.randint(minimum, maximum, size=lhs_shape))
+ rhs_cst = list(np.random.randint(minimum, maximum, size=rhs_shape))
+
+ @cnp.compiler({"x": "encrypted"})
+ def lhs_operator(x):
+ return x @ rhs_cst
+
+ @cnp.compiler({"x": "encrypted"})
+ def rhs_operator(x):
+ return lhs_cst @ x
+
+ @cnp.compiler({"x": "encrypted"})
+ def lhs_function(x):
+ return np.matmul(x, rhs_cst)
+
+ @cnp.compiler({"x": "encrypted"})
+ def rhs_function(x):
+ return np.matmul(lhs_cst, x)
+
+    lhs_inputset = [np.random.randint(minimum, maximum, size=lhs_shape) for _ in range(100)]
+    rhs_inputset = [np.random.randint(minimum, maximum, size=rhs_shape) for _ in range(100)]
+
+ lhs_operator_circuit = lhs_operator.compile(lhs_inputset, configuration)
+ rhs_operator_circuit = rhs_operator.compile(rhs_inputset, configuration)
+ lhs_function_circuit = lhs_function.compile(lhs_inputset, configuration)
+ rhs_function_circuit = rhs_function.compile(rhs_inputset, configuration)
+
+ lhs_sample = np.random.randint(minimum, maximum, size=lhs_shape)
+ rhs_sample = np.random.randint(minimum, maximum, size=rhs_shape)
+
+ helpers.check_execution(lhs_operator_circuit, lhs_operator, lhs_sample)
+ helpers.check_execution(rhs_operator_circuit, rhs_operator, rhs_sample)
+ helpers.check_execution(lhs_function_circuit, lhs_function, lhs_sample)
+ helpers.check_execution(rhs_function_circuit, rhs_function, rhs_sample)
diff --git a/frontends/concrete-python/tests/execution/test_maxpool.py b/frontends/concrete-python/tests/execution/test_maxpool.py
new file mode 100644
index 000000000..4c8316fe7
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_maxpool.py
@@ -0,0 +1,393 @@
+"""
+Tests of execution of the maxpool operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+import concrete.onnx as connx
+
+
+@pytest.mark.parametrize(
+ "operation,sample_input,expected_output",
+ [
+ pytest.param(
+ {"kernel_shape": (3,)},
+ [1, 2, 2, 3, 2, 2, 2, 4, 1, 5, 2, 6],
+ [2, 3, 3, 3, 2, 4, 4, 5, 5, 6],
+ ),
+ pytest.param(
+ {"kernel_shape": (3,), "strides": (2,)},
+ [1, 2, 2, 3, 2, 2, 2, 4, 1, 5, 2, 6, 7],
+ [2, 3, 2, 4, 5, 7],
+ ),
+ pytest.param(
+ {
+ "kernel_shape": (2, 2),
+ },
+ [
+ [3, 1, 2],
+ [1, 1, 1],
+ [2, 3, 4],
+ [4, 1, 2],
+ ],
+ [
+ [3, 2],
+ [3, 4],
+ [4, 4],
+ ],
+ ),
+ pytest.param(
+ {
+ "kernel_shape": (2, 2),
+ "strides": (2, 1),
+ },
+ [
+ [3, 1, 2],
+ [1, 1, 1],
+ [2, 3, 4],
+ [4, 1, 2],
+ ],
+ [
+ [3, 2],
+ [4, 4],
+ ],
+ ),
+ ],
+)
+def test_maxpool(
+ operation,
+ sample_input,
+ expected_output,
+ helpers,
+):
+ """
+ Test maxpool.
+ """
+
+ sample_input = np.expand_dims(np.array(sample_input), axis=(0, 1))
+ expected_output = np.expand_dims(np.array(expected_output), axis=(0, 1))
+
+ assert np.array_equal(connx.maxpool(sample_input, **operation), expected_output)
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return connx.maxpool(x, **operation)
+
+ graph = function.trace([sample_input], helpers.configuration())
+ assert np.array_equal(graph(sample_input), expected_output)
+
+
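For reference, the 1-D expected outputs in the parametrization above can be reproduced with a plain NumPy sliding-window maximum. This is a standalone sketch, independent of `concrete.onnx`, using stride-1 and stride-2 windows like the first two cases:

```python
import numpy as np

def maxpool1d(x, kernel, stride=1):
    # Take the max over each window of length `kernel`, moving by `stride`.
    starts = range(0, len(x) - kernel + 1, stride)
    return np.array([x[i:i + kernel].max() for i in starts])

x = np.array([1, 2, 2, 3, 2, 2, 2, 4, 1, 5, 2, 6])
print(maxpool1d(x, 3).tolist())  # [2, 3, 3, 3, 2, 4, 4, 5, 5, 6]
```

The same helper with `stride=2` on the second sample input reproduces `[2, 3, 2, 4, 5, 7]`.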
+@pytest.mark.parametrize(
+ "input_shape,operation,expected_error,expected_message",
+ [
+ pytest.param(
+ (10, 10),
+ {
+ "kernel_shape": (),
+ },
+ ValueError,
+ "Expected input to have at least 3 dimensions (N, C, D1, ...) but it only has 2",
+ ),
+ pytest.param(
+ (1, 1, 5, 4, 3, 2),
+ {
+ "kernel_shape": (),
+ },
+ NotImplementedError,
+ "4D maximum pooling is not supported yet",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": "",
+ },
+ TypeError,
+ "Expected kernel_shape to be a tuple or a list but it's str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": ["0"],
+ },
+ TypeError,
+ "Expected kernel_shape to consist of integers but it has an element of type str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (3,),
+ },
+ ValueError,
+ "Expected kernel_shape to have 2 elements but it has 1",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "strides": "",
+ },
+ TypeError,
+ "Expected strides to be a tuple or a list but it's str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "strides": ["0"],
+ },
+ TypeError,
+ "Expected strides to consist of integers but it has an element of type str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "strides": (3,),
+ },
+ ValueError,
+ "Expected strides to have 2 elements but it has 1",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "auto_pad": True,
+ },
+ TypeError,
+ "Expected auto_pad to be of type str but it's bool",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "auto_pad": "YES_PLEASE",
+ },
+ ValueError,
+ "Expected auto_pad to be one of NOTSET, SAME_LOWER, SAME_UPPER, VALID "
+ "but it's YES_PLEASE",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "auto_pad": "VALID",
+ },
+ NotImplementedError,
+ "Desired auto_pad of VALID is not supported yet",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "pads": "",
+ },
+ TypeError,
+ "Expected pads to be a tuple or a list but it's str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "pads": ["0"],
+ },
+ TypeError,
+ "Expected pads to consist of integers but it has an element of type str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "pads": (3,),
+ },
+ ValueError,
+ "Expected pads to have 4 elements but it has 1",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "pads": (1, 1, 2, 2),
+ },
+ NotImplementedError,
+ "Desired pads of (1, 1, 2, 2) is not supported yet because of uneven padding",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "dilations": "",
+ },
+ TypeError,
+ "Expected dilations to be a tuple or a list but it's str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "dilations": ["0"],
+ },
+ TypeError,
+ "Expected dilations to consist of integers but it has an element of type str",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "dilations": (3,),
+ },
+ ValueError,
+ "Expected dilations to have 2 elements but it has 1",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "ceil_mode": None,
+ },
+ TypeError,
+ "Expected ceil_mode to be of type int but it's NoneType",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "ceil_mode": 10,
+ },
+ ValueError,
+ "Expected ceil_mode to be one of 0, 1 but it's 10",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "ceil_mode": 1,
+ },
+ NotImplementedError,
+ "Desired ceil_mode of 1 is not supported yet",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "storage_order": None,
+ },
+ TypeError,
+ "Expected storage_order to be of type int but it's NoneType",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "storage_order": 10,
+ },
+ ValueError,
+ "Expected storage_order to be one of 0, 1 but it's 10",
+ ),
+ pytest.param(
+ (1, 1, 5, 4),
+ {
+ "kernel_shape": (2, 3),
+ "storage_order": 1,
+ },
+ NotImplementedError,
+ "Desired storage_order of 1 is not supported yet",
+ ),
+ ],
+)
+def test_bad_maxpool(
+ input_shape,
+ operation,
+ expected_error,
+ expected_message,
+ helpers,
+):
+ """
+ Test maxpool with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ connx.maxpool(np.random.randint(0, 10, size=input_shape), **operation)
+
+ helpers.check_str(expected_message, str(excinfo.value))
+
+
+def test_bad_maxpool_special(helpers):
+ """
+ Test maxpool with bad parameters for special cases.
+ """
+
+ # compile
+ # -------
+
+ @cnp.compiler({"x": "encrypted"})
+ def not_compilable(x):
+ return connx.maxpool(x, kernel_shape=(4, 3))
+
+ inputset = [np.random.randint(0, 10, size=(1, 1, 10, 10)) for _ in range(100)]
+ with pytest.raises(NotImplementedError) as excinfo:
+ not_compilable.compile(inputset, helpers.configuration())
+
+ helpers.check_str("MaxPool operation cannot be compiled yet", str(excinfo.value))
+
+ # clear input
+ # -----------
+
+ @cnp.compiler({"x": "clear"})
+ def clear_input(x):
+ return connx.maxpool(x, kernel_shape=(4, 3, 2))
+
+ inputset = [np.zeros((1, 1, 10, 10, 10), dtype=np.int64)]
+ with pytest.raises(RuntimeError) as excinfo:
+ clear_input.compile(inputset, helpers.configuration())
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 0]
+%1 = maxpool(%0, kernel_shape=(4, 3, 2), strides=(1, 1, 1), pads=(0, 0, 0, 0, 0, 0), dilations=(1, 1, 1), ceil_mode=False) # ClearTensor ∈ [0, 0]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted maxpool is supported
+return %1
+
+ """.strip(), # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
+
+ # badly typed ndarray input
+ # -------------------------
+
+ with pytest.raises(TypeError) as excinfo:
+ connx.maxpool(np.array([{}, None]), ())
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+Expected input elements to be of type np.integer, np.floating, or np.bool_ but it's dtype[object_]
+
+ """.strip(), # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
+
+ # badly typed input
+ # -----------------
+
+ with pytest.raises(TypeError) as excinfo:
+ connx.maxpool("", ())
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+Expected input to be of type np.ndarray or Tracer but it's str
+
+ """.strip(), # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
diff --git a/frontends/concrete-python/tests/execution/test_mul.py b/frontends/concrete-python/tests/execution/test_mul.py
new file mode 100644
index 000000000..925236967
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_mul.py
@@ -0,0 +1,76 @@
+"""
+Tests of execution of mul operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x: x * 3,
+ id="x * 3",
+ ),
+ pytest.param(
+ lambda x: 3 * x,
+ id="3 * x",
+ ),
+ pytest.param(
+ lambda x: np.dot(x, 3),
+ id="np.dot(x, 3)",
+ ),
+ pytest.param(
+ lambda x: np.dot(3, x),
+ id="np.dot(3, x)",
+ ),
+ pytest.param(
+ lambda x: x * np.array([1, 2, 3]),
+ id="x * [1, 2, 3]",
+ ),
+ pytest.param(
+ lambda x: np.array([1, 2, 3]) * x,
+ id="[1, 2, 3] * x",
+ ),
+ pytest.param(
+ lambda x: x * np.array([[1, 2, 3], [3, 1, 2]]),
+ id="x * [[1, 2, 3], [3, 1, 2]]",
+ ),
+ pytest.param(
+ lambda x: np.array([[1, 2, 3], [3, 1, 2]]) * x,
+ id="[[1, 2, 3], [3, 1, 2]] * x",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 40], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 40], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 40], "status": "encrypted", "shape": (2, 3)},
+ },
+ ],
+)
+def test_constant_mul(function, parameters, helpers):
+ """
+ Test mul where one of the operands is a constant.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
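The constant-times-tensor cases above rely on NumPy broadcasting: a `(3,)` or `(2, 3)` constant is stretched against the input shape, and `np.dot` with a scalar degenerates to elementwise multiplication. A standalone sketch of the same semantics, outside any compilation:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
row = np.array([1, 2, 3])        # shape (3,) broadcasts across both rows
print((x * row).tolist())        # [[0, 2, 6], [3, 8, 15]]
print(np.dot(x, 3).tolist())     # scalar dot is elementwise: [[0, 3, 6], [9, 12, 15]]
```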
diff --git a/frontends/concrete-python/tests/execution/test_neg.py b/frontends/concrete-python/tests/execution/test_neg.py
new file mode 100644
index 000000000..43d042cb3
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_neg.py
@@ -0,0 +1,52 @@
+"""
+Tests of execution of neg operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 64], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 64], "status": "encrypted", "shape": (3, 2)},
+ },
+ {
+ "x": {"range": [-63, 0], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [-63, 0], "status": "encrypted", "shape": (3, 2)},
+ },
+ ],
+)
+def test_neg(parameters, helpers):
+ """
+ Test neg.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ @cnp.compiler(parameter_encryption_statuses)
+ def operator(x):
+ return -x
+
+ @cnp.compiler(parameter_encryption_statuses)
+ def function(x):
+ return np.negative(x)
+
+ inputset = helpers.generate_inputset(parameters)
+
+ operator_circuit = operator.compile(inputset, configuration)
+ function_circuit = function.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+
+ helpers.check_execution(operator_circuit, operator, sample)
+ helpers.check_execution(function_circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_ones.py b/frontends/concrete-python/tests/execution/test_ones.py
new file mode 100644
index 000000000..5f53c0375
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_ones.py
@@ -0,0 +1,49 @@
+"""
+Tests of execution of ones operation.
+"""
+
+import random
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x: cnp.one() + x,
+ id="cnp.one() + x",
+ ),
+ pytest.param(
+ lambda x: cnp.ones(()) + x,
+ id="cnp.ones(()) + x",
+ ),
+ pytest.param(
+ lambda x: cnp.ones(10) + x,
+ id="cnp.ones(10) + x",
+ ),
+ pytest.param(
+ lambda x: cnp.ones((10,)) + x,
+ id="cnp.ones((10,)) + x",
+ ),
+ pytest.param(
+ lambda x: cnp.ones((3, 2)) + x,
+ id="cnp.ones((3, 2)) + x",
+ ),
+ ],
+)
+def test_ones(function, helpers):
+ """
+ Test ones.
+ """
+
+ configuration = helpers.configuration()
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = range(10)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = random.randint(0, 9)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_others.py b/frontends/concrete-python/tests/execution/test_others.py
new file mode 100644
index 000000000..fb45bb002
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_others.py
@@ -0,0 +1,902 @@
+"""
+Tests of execution of operations converted to table lookups.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+def fusable_with_bigger_search(x, y):
+ """
+ Fusable function that requires a single iteration for fusing.
+ """
+
+ x = x + 1
+
+ x_1 = x.astype(np.int64)
+ x_1 = x_1 + 1.5
+
+ x_2 = x.astype(np.int64)
+ x_2 = x_2 + 3.4
+
+ add = x_1 + x_2
+ add_int = add.astype(np.int64)
+
+ return add_int + y
+
+
+def fusable_with_bigger_search_needs_second_iteration(x, y):
+ """
+ Fusable function that requires more than one iteration for fusing.
+ """
+
+ x = x + 1
+ x = x + 0.5
+ x = np.cos(x)
+
+ x_1 = x.astype(np.int64)
+ x_1 = x_1 + 1.5
+
+ x_p = x + 1
+ x_p2 = x_p + 1
+
+ x_2 = (x_p + x_p2).astype(np.int64)
+ x_2 = x_2 + 3.4
+
+ add = x_1 + x_2
+ add_int = add.astype(np.int64)
+
+ return add_int + y
+
+
+def fusable_with_one_of_the_start_nodes_is_lca_generator():
+ """
+ Generator of a fusable function that has one of its start nodes as the lowest common ancestor (LCA).
+ """
+
+ # pylint: disable=invalid-name,too-many-locals,too-many-statements
+
+ def subgraph_18(x):
+ t0 = 0
+ t1 = 3
+ t2 = 2
+ t3 = 2.4688520431518555
+ t4 = 2.4688520431518555
+ t5 = x
+ t6 = np.multiply(t4, t5)
+ t7 = np.true_divide(t6, t3)
+ t8 = np.add(t7, t2)
+ t9 = np.rint(t8)
+ t10 = np.clip(t9, t0, t1)
+ t11 = t10.astype(np.int64)
+ return t11
+
+ def subgraph_24(x):
+ t0 = 0
+ t1 = [0.15588106, -0.01305565]
+ t2 = 1.3664466152828822
+ t3 = [[4, -4]]
+ t4 = 0
+ t5 = x
+ t6 = t5.astype(np.float32)
+ t7 = np.add(t6, t4)
+ t8 = np.add(t7, t3)
+ t9 = np.multiply(t2, t8)
+ t10 = np.add(t1, t9)
+ t11 = np.greater(t10, t0)
+ return t11
+
+ cst0 = np.random.randint(-2, 2, size=(10, 2))
+ cst1 = np.random.randint(0, 2, size=(10, 1))
+
+ def function(x):
+ t0 = 0
+ t1 = 3
+ t2 = 1
+ t3 = 1.2921873902965313
+ t4 = 1.0507009873554805
+ t5 = 1
+ t6 = 1.7580993408473766
+ t7 = [0.15588106, -0.01305565]
+ t8 = 1
+ t9 = 1.3664466152828822
+ t10 = [[4, -4]]
+ t11 = 0
+ t12 = cst0
+ t13 = 0
+ t14 = cst1
+ t15 = x
+ t16 = -2
+ t17 = np.add(t15, t16)
+ t18 = subgraph_18(t17)
+ t19 = np.matmul(t18, t12)
+ t20 = np.matmul(t18, t14)
+ t21 = np.multiply(t13, t20)
+ t22 = np.add(t19, t21)
+ t23 = t22.astype(np.float32)
+ t24 = subgraph_24(t22)
+ t25 = np.add(t23, t11)
+ t26 = np.subtract(t5, t24)
+ t27 = np.add(t25, t10)
+ t28 = np.multiply(t9, t27)
+ t29 = np.add(t7, t28)
+ t30 = np.multiply(t4, t29)
+ t31 = np.exp(t29)
+ t32 = np.multiply(t24, t30)
+ t33 = np.subtract(t31, t8)
+ t34 = np.multiply(t6, t33)
+ t35 = np.multiply(t26, t34)
+ t36 = np.add(t32, t35)
+ t37 = np.true_divide(t36, t3)
+ t38 = np.add(t37, t2)
+ t39 = np.rint(t38)
+ t40 = np.clip(t39, t0, t1)
+ t41 = t40.astype(np.int64)
+ return t41
+
+ return function
+
+ # pylint: enable=invalid-name,too-many-locals,too-many-statements
+
+
+def fusable_with_hard_to_find_lca(x):
+ """
+ Fusable function that requires a harder LCA search.
+ """
+
+ a = x * 3
+ b = x // 3
+ c = a + b
+ return ((np.sin(a) ** 2) + (np.cos(c) ** 2)).round().astype(np.int64)
+
+
+def fusable_with_hard_to_find_lca_used_twice(x):
+ """
+ Fusable function that uses `fusable_with_hard_to_find_lca` twice.
+ """
+
+ a = x @ np.array([[3, 1], [4, 2]])
+ b = x @ np.array([[1, 2], [3, 4]])
+
+ a = fusable_with_hard_to_find_lca(a)
+ b = fusable_with_hard_to_find_lca(b)
+
+ return a + b
+
+
+def fusable_additional_1(x):
+ """
+ Another fusable function for additional safety.
+ """
+
+ a = x.astype(np.float64) * 3.0
+ b = x + 1
+ c = a.astype(np.int64)
+ return (a + b + c).astype(np.int64)
+
+
+def fusable_additional_2(x):
+ """
+ Another fusable function for additional safety.
+ """
+
+ a = x.astype(np.float64) / 3.0
+ b = x + 1
+ c = a * a
+ return (a + b + c).astype(np.int64)
+
+
+def deterministic_unary_function(x):
+ """
+ An example deterministic unary function.
+ """
+
+ def per_element(element):
+ result = 0
+ for i in range(element):
+ result += i
+ return result
+
+ return np.vectorize(per_element)(x)
+
+
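`deterministic_unary_function` above sums the integers below each element, so element-wise it equals the closed form `n * (n - 1) // 2`. A quick standalone check of that equivalence in plain NumPy, without any compilation:

```python
import numpy as np

def per_element(n):
    # Sum of 0 .. n-1, the same loop as in the test helper above.
    return sum(range(n))

vectorized = np.vectorize(per_element)
xs = np.array([0, 1, 4, 10])
print(vectorized(xs).tolist())        # [0, 0, 6, 45]
print((xs * (xs - 1) // 2).tolist())  # closed form agrees: [0, 0, 6, 45]
```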
+@pytest.mark.parametrize(
+ "function,parameters",
+ [
+ pytest.param(
+ lambda x: x // 3,
+ {
+ "x": {"status": "encrypted", "range": [0, 127]},
+ },
+ id="x // 3",
+ ),
+ pytest.param(
+ lambda x: 127 // x,
+ {
+ "x": {"status": "encrypted", "range": [1, 127]},
+ },
+ id="127 // x",
+ ),
+ pytest.param(
+ lambda x: (x / 3).astype(np.int64),
+ {
+ "x": {"status": "encrypted", "range": [0, 127]},
+ },
+ id="(x / 3).astype(np.int64)",
+ ),
+ pytest.param(
+ lambda x: (127 / x).astype(np.int64),
+ {
+ "x": {"status": "encrypted", "range": [1, 127]},
+ },
+ id="(127 / x).astype(np.int64)",
+ ),
+ pytest.param(
+ lambda x: x**2,
+ {
+ "x": {"status": "encrypted", "range": [0, 11]},
+ },
+ id="x ** 2",
+ ),
+ pytest.param(
+ lambda x: 2**x,
+ {
+ "x": {"status": "encrypted", "range": [0, 6]},
+ },
+ id="2 ** x",
+ ),
+ pytest.param(
+ lambda x: x % 10,
+ {
+ "x": {"status": "encrypted", "range": [0, 127]},
+ },
+ id="x % 10",
+ ),
+ pytest.param(
+ lambda x: 121 % x,
+ {
+ "x": {"status": "encrypted", "range": [1, 127]},
+ },
+ id="121 % x",
+ ),
+ pytest.param(
+ lambda x: +x,
+ {
+ "x": {"status": "encrypted", "range": [0, 127]},
+ },
+ id="+x",
+ ),
+ pytest.param(
+ lambda x: abs(42 - x),
+ {
+ "x": {"status": "encrypted", "range": [0, 84]},
+ },
+ id="abs(42 - x)",
+ ),
+ pytest.param(
+ lambda x: ~x,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="~x",
+ ),
+ pytest.param(
+ lambda x: x & 10,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="x & 10",
+ ),
+ pytest.param(
+ lambda x: 5 & x,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="5 & x",
+ ),
+ pytest.param(
+ lambda x: x | 6,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="x | 6",
+ ),
+ pytest.param(
+ lambda x: 11 | x,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="11 | x",
+ ),
+ pytest.param(
+ lambda x: x ^ 9,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="x ^ 9",
+ ),
+ pytest.param(
+ lambda x: 13 ^ x,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="13 ^ x",
+ ),
+ pytest.param(
+ lambda x: x << 2,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="x << 2",
+ ),
+ pytest.param(
+ lambda x: 2 << x,
+ {
+ "x": {"status": "encrypted", "range": [0, 5]},
+ },
+ id="2 << x",
+ ),
+ pytest.param(
+ lambda x: x >> 2,
+ {
+ "x": {"status": "encrypted", "range": [0, 120]},
+ },
+ id="x >> 2",
+ ),
+ pytest.param(
+ lambda x: 120 >> x,
+ {
+ "x": {"status": "encrypted", "range": [0, 16]},
+ },
+ id="120 >> x",
+ ),
+ pytest.param(
+ lambda x: x > 50,
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="x > 50",
+ ),
+ pytest.param(
+ lambda x: 50 > x, # pylint: disable=misplaced-comparison-constant
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="50 > x",
+ ),
+ pytest.param(
+ lambda x: x < 50,
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="x < 50",
+ ),
+ pytest.param(
+ lambda x: 50 < x, # pylint: disable=misplaced-comparison-constant
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="50 < x",
+ ),
+ pytest.param(
+ lambda x: x >= 50,
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="x >= 50",
+ ),
+ pytest.param(
+ lambda x: 50 >= x, # pylint: disable=misplaced-comparison-constant
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="50 >= x",
+ ),
+ pytest.param(
+ lambda x: x <= 50,
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="x <= 50",
+ ),
+ pytest.param(
+ lambda x: 50 <= x, # pylint: disable=misplaced-comparison-constant
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="50 <= x",
+ ),
+ pytest.param(
+ lambda x: x == 50,
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="x == 50",
+ ),
+ pytest.param(
+ lambda x: 50 == x, # pylint: disable=misplaced-comparison-constant
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="50 == x",
+ ),
+ pytest.param(
+ lambda x: x != 50,
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="x != 50",
+ ),
+ pytest.param(
+ lambda x: 50 != x, # pylint: disable=misplaced-comparison-constant
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="50 != x",
+ ),
+ pytest.param(
+ lambda x: x.clip(5, 10),
+ {
+ "x": {"status": "encrypted", "range": [0, 15]},
+ },
+ id="x.clip(5, 10)",
+ ),
+ pytest.param(
+ lambda x: (60 * np.sin(x)).astype(np.int64) + 60,
+ {
+ "x": {"status": "encrypted", "range": [0, 127]},
+ },
+ id="(60 * np.sin(x)).astype(np.int64) + 60",
+ ),
+ pytest.param(
+ lambda x: ((np.sin(x) ** 2) + (np.cos(x) ** 2)).astype(np.int64),
+ {
+ "x": {"status": "encrypted", "range": [0, 127]},
+ },
+ id="((np.sin(x) ** 2) + (np.cos(x) ** 2)).astype(np.int64)",
+ ),
+ pytest.param(
+ lambda x: np.maximum(x, [[10, 20], [30, 40], [50, 60]]),
+ {
+ "x": {"status": "encrypted", "range": [0, 127], "shape": (3, 2)},
+ },
+ id="np.maximum(x, [[10, 20], [30, 40], [50, 60]])",
+ ),
+ pytest.param(
+ fusable_with_bigger_search,
+ {
+ "x": {"status": "encrypted", "range": [5, 10]},
+ "y": {"status": "encrypted", "range": [5, 10]},
+ },
+ id="fusable_with_bigger_search",
+ ),
+ pytest.param(
+ fusable_with_bigger_search_needs_second_iteration,
+ {
+ "x": {"status": "encrypted", "range": [5, 10]},
+ "y": {"status": "encrypted", "range": [5, 10]},
+ },
+ id="fusable_with_bigger_search_needs_second_iteration",
+ ),
+ pytest.param(
+ fusable_with_one_of_the_start_nodes_is_lca_generator(),
+ {
+ "x": {"status": "encrypted", "range": [0, 4], "shape": (1, 10)},
+ },
+ id="fusable_with_one_of_the_start_nodes_is_lca",
+ ),
+ pytest.param(
+ fusable_with_hard_to_find_lca,
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="fusable_with_hard_to_find_lca",
+ ),
+ pytest.param(
+ fusable_with_hard_to_find_lca_used_twice,
+ {
+ "x": {"status": "encrypted", "range": [0, 4], "shape": (2, 2)},
+ },
+ id="fusable_with_hard_to_find_lca_used_twice",
+ ),
+ pytest.param(
+ fusable_additional_1,
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="fusable_additional_1",
+ ),
+ pytest.param(
+ fusable_additional_2,
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="fusable_additional_2",
+ ),
+ pytest.param(
+ lambda x: x + x.shape[0] + x.ndim + x.size,
+ {
+ "x": {"status": "encrypted", "range": [0, 15], "shape": (3, 2)},
+ },
+ id="x + x.shape[0] + x.ndim + x.size",
+ ),
+ pytest.param(
+ lambda x: (50 * np.sin(x.transpose())).astype(np.int64),
+ {
+ "x": {"status": "encrypted", "range": [0, 15], "shape": (3, 2)},
+ },
+ id="(50 * np.sin(x.transpose())).astype(np.int64)",
+ ),
+ pytest.param(
+ lambda x: np.where(x < 5, x * 3, x),
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="np.where(x < 5, x * 3, x)",
+ ),
+ pytest.param(
+ lambda x: x + np.ones_like(x),
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="x + np.ones_like(x)",
+ ),
+ pytest.param(
+ lambda x: x + np.zeros_like(x),
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="x + np.zeros_like(x)",
+ ),
+ pytest.param(
+ lambda x: cnp.univariate(deterministic_unary_function)(x),
+ {
+ "x": {"status": "encrypted", "range": [0, 10]},
+ },
+ id="cnp.univariate(deterministic_unary_function)(x)",
+ ),
+ pytest.param(
+ lambda x: round(np.sqrt(x)),
+ {
+ "x": {"status": "encrypted", "range": [0, 100], "shape": ()},
+ },
+ id="round(np.sqrt(x))",
+ ),
+ pytest.param(
+ lambda x: np.sqrt(x).round().astype(np.int64),
+ {
+ "x": {"status": "encrypted", "range": [0, 100]},
+ },
+ id="np.sqrt(x).round().astype(np.int64)",
+ ),
+ pytest.param(
+ lambda x: (2.5 * round(np.sqrt(x), ndigits=4)).astype(np.int64),
+ {
+ "x": {"status": "encrypted", "range": [0, 100], "shape": ()},
+ },
+ id="(2.5 * round(np.sqrt(x), ndigits=4)).astype(np.int64)",
+ ),
+ pytest.param(
+ lambda x, y: cnp.LookupTable(list(range(32)))[x + y],
+ {
+ "x": {"status": "encrypted", "range": [-10, 10]},
+ "y": {"status": "encrypted", "range": [-10, 10]},
+ },
+ id="cnp.LookupTable(list(range(32)))[x + y]",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=0),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=0)",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=1),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=1)",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=2),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=2)",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=(0, 1)),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=(0, 1))",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=(0, 2)),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=(0, 2))",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=(1, 2)),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=(1, 2))",
+ ),
+ pytest.param(
+ lambda x: np.expand_dims(x, axis=(0, 1, 2)),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (3, 2)},
+ },
+ id="np.expand_dims(x, axis=(0, 1, 2))",
+ ),
+ pytest.param(
+ lambda x: x**3,
+ {
+ "x": {"status": "encrypted", "range": [-30, 30]},
+ },
+ id="x ** 3",
+ ),
+ pytest.param(
+ lambda x: np.squeeze(x),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (1, 2, 1, 3, 1, 4)},
+ },
+ id="np.squeeze(x)",
+ ),
+ pytest.param(
+ lambda x: np.squeeze(x, axis=2),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (1, 2, 1, 3, 1, 4)},
+ },
+ id="np.squeeze(x, axis=2)",
+ ),
+ pytest.param(
+ lambda x: np.squeeze(x, axis=(0, 4)),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (1, 2, 1, 3, 1, 4)},
+ },
+ id="np.squeeze(x, axis=(0, 4))",
+ ),
+ pytest.param(
+ lambda x: np.squeeze(x),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (1, 1, 1)},
+ },
+ id="np.squeeze(x) where x.shape == (1, 1, 1)",
+ ),
+ pytest.param(
+ lambda x: np.squeeze(x, axis=1),
+ {
+ "x": {"status": "encrypted", "range": [-10, 10], "shape": (1, 1, 1)},
+ },
+ id="np.squeeze(x, axis=1) where x.shape == (1, 1, 1)",
+ ),
+ ],
+)
+def test_others(function, parameters, helpers):
+ """
+ Test others.
+ """
+
+ # scalar
+ # ------
+
+ if "shape" not in parameters["x"] or parameters["x"]["shape"] == ():
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
+ # tensor
+ # ------
+
+ if "shape" not in parameters["x"]:
+ parameters["x"]["shape"] = (3, 2)
+
+ if parameters["x"]["shape"] == ():
+ return
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
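The `np.expand_dims`/`np.squeeze` cases in the parametrization above only add or remove singleton axes; the shape bookkeeping they exercise can be checked directly in plain NumPy:

```python
import numpy as np

x = np.zeros((3, 2))
y = np.expand_dims(x, axis=(0, 2))   # new singleton axes at positions 0 and 2
print(y.shape)                       # (1, 3, 1, 2)
print(np.squeeze(y).shape)           # all singleton axes removed: (3, 2)
print(np.squeeze(y, axis=0).shape)   # only axis 0 removed: (3, 1, 2)
```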
+
+def test_others_bad_fusing(helpers):
+ """
+ Test others with bad fusing.
+ """
+
+ configuration = helpers.configuration()
+
+ # two variable inputs
+ # -------------------
+
+ @cnp.compiler({"x": "encrypted", "y": "clear"})
+ def function1(x, y):
+ return (10 * (np.sin(x) ** 2) + 10 * (np.cos(y) ** 2)).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ inputset = [(i, i) for i in range(100)]
+ function1.compile(inputset, configuration)
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+A subgraph within the function you are trying to compile cannot be fused because it has multiple input nodes
+
+ %0 = x # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this is one of the input nodes
+ %1 = y # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this is one of the input nodes
+ %2 = sin(%0) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %3 = 2 # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %4 = power(%2, %3) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %5 = 10 # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %6 = multiply(%5, %4) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %7 = cos(%1) # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %8 = 2 # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+ %9 = power(%7, %8) # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%10 = 10 # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%11 = multiply(%10, %9) # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%12 = add(%6, %11) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%13 = astype(%12, dtype=int_) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+return %13
+
+ """, # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
+
+ # intermediates with different shape
+ # ----------------------------------
+
+ @cnp.compiler({"x": "encrypted"})
+ def function2(x):
+ return np.abs(np.sin(x)).reshape((2, 3)).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ inputset = [np.random.randint(0, 2**7, size=(3, 2)) for _ in range(100)]
+ function2.compile(inputset, configuration)
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+A subgraph within the function you are trying to compile cannot be fused because of a node, which is has a different shape than the input node
+
+%0 = x # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ with this input node
+%1 = sin(%0) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%2 = absolute(%1) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%3 = reshape(%2, newshape=(2, 3)) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%4 = astype(%3, dtype=int_) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this node has a different shape than the input node
+return %4
+
+ """, # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
+
+ # non-fusable operation
+ # ---------------------
+
+ @cnp.compiler({"x": "encrypted"})
+ def function3(x):
+ return np.abs(np.sin(x)).transpose().astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ inputset = [[[0, 1], [2, 3]]]
+ function3.compile(inputset, configuration)
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+A subgraph within the function you are trying to compile cannot be fused because of a node, which is marked explicitly as non-fusable
+
+%0 = x # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ with this input node
+%1 = sin(%0) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%2 = absolute(%1) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+%3 = transpose(%2) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this node is not fusable
+%4 = astype(%3, dtype=int_) # EncryptedTensor
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+return %4
+
+ """, # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
+
+ # integer two variable inputs
+ # ---------------------------
+
+ @cnp.compiler({"x": "encrypted", "y": "clear"})
+ def function4(x, y):
+ return np.maximum(x, y)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ inputset = [(i, i) for i in range(100)]
+ function4.compile(inputset, configuration)
+
+ helpers.check_str(
+ # pylint: disable=line-too-long
+ """
+
+A subgraph within the function you are trying to compile cannot be fused because it has multiple input nodes
+
+%0 = x # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this is one of the input nodes
+%1 = y # ClearScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this is one of the input nodes
+%2 = maximum(%0, %1) # EncryptedScalar
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ within this subgraph
+return %2
+
+ """, # noqa: E501
+ # pylint: enable=line-too-long
+ str(excinfo.value),
+ )
+
+
+def test_others_bad_univariate(helpers):
+ """
+ Test univariate with a bad function.
+ """
+
+ configuration = helpers.configuration()
+
+ def bad_univariate(x):
+ return np.array([x, x, x])
+
+ @cnp.compiler({"x": "encrypted"})
+ def f(x):
+ return cnp.univariate(bad_univariate)(x)
+
+ with pytest.raises(ValueError) as excinfo:
+ inputset = range(10)
+ f.compile(inputset, configuration)
+
+ helpers.check_str(
+ "Function bad_univariate cannot be used with cnp.univariate",
+ str(excinfo.value),
+ )
diff --git a/frontends/concrete-python/tests/execution/test_reshape.py b/frontends/concrete-python/tests/execution/test_reshape.py
new file mode 100644
index 000000000..ac44b5093
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_reshape.py
@@ -0,0 +1,170 @@
+"""
+Tests of execution of reshape operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "shape,newshape",
+ [
+ pytest.param(
+ (12,),
+ (12, 1),
+ ),
+ pytest.param(
+ (12,),
+ (1, 12),
+ ),
+ pytest.param(
+ (12,),
+ (3, 4),
+ ),
+ pytest.param(
+ (12,),
+ (3, 2, 2),
+ ),
+ pytest.param(
+ (3, 4),
+ 12,
+ ),
+ pytest.param(
+ (3, 4),
+ (12,),
+ ),
+ pytest.param(
+ (3, 4),
+ (4, 3),
+ ),
+ pytest.param(
+ (3, 4),
+ (2, 2, 3),
+ ),
+ pytest.param(
+ (3, 4),
+ (2, 3, 2),
+ ),
+ pytest.param(
+ (3, 4),
+ (3, 2, 2),
+ ),
+ pytest.param(
+ (3, 4),
+ (3, 1, 4),
+ ),
+ pytest.param(
+ (3, 4),
+ (12, 1),
+ ),
+ pytest.param(
+ (3, 4),
+ (-1,),
+ ),
+ pytest.param(
+ (3, 4),
+ -1,
+ ),
+ pytest.param(
+ (2, 2, 3),
+ (3, 4),
+ ),
+ pytest.param(
+ (2, 2, 3),
+ (4, 3),
+ ),
+ pytest.param(
+ (2, 2, 3),
+ (3, 2, 2),
+ ),
+ pytest.param(
+ (2, 3, 4, 5, 6),
+ (6, 4, 30),
+ ),
+ pytest.param(
+ (6, 4, 30),
+ (2, 3, 4, 5, 6),
+ ),
+ pytest.param(
+ (2, 3, 4, 5, 6),
+ (2, 60, 6),
+ ),
+ pytest.param(
+ (2, 60, 6),
+ (2, 3, 4, 5, 6),
+ ),
+ pytest.param(
+ (2, 3, 2, 3, 4),
+ (6, 6, -1),
+ ),
+ pytest.param(
+ (2, 3, 2, 3, 4),
+ (6, -1, 12),
+ ),
+ pytest.param(
+ (2, 3, 2, 3, 4),
+ (-1, 18, 4),
+ ),
+ ],
+)
+def test_reshape(shape, newshape, helpers):
+ """
+ Test reshape.
+ """
+
+ configuration = helpers.configuration()
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return np.reshape(x, newshape)
+
+ @cnp.compiler({"x": "encrypted"})
+ def method(x):
+ return x.reshape(newshape)
+
+ inputset = [np.random.randint(0, 2**5, size=shape) for _ in range(100)]
+
+ function_circuit = function.compile(inputset, configuration)
+ method_circuit = method.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**5, size=shape)
+
+ helpers.check_execution(function_circuit, function, sample)
+ helpers.check_execution(method_circuit, method, sample)
+
+
+@pytest.mark.parametrize(
+ "shape",
+ [
+ pytest.param(
+ (12,),
+ ),
+ pytest.param(
+ (3, 4),
+ ),
+ pytest.param(
+ (2, 2, 3),
+ ),
+ pytest.param(
+ (2, 3, 4, 5, 6),
+ ),
+ ],
+)
+def test_flatten(shape, helpers):
+ """
+ Test flatten.
+ """
+
+ configuration = helpers.configuration()
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ return x.flatten()
+
+ inputset = [np.random.randint(0, 2**5, size=shape) for _ in range(100)]
+ circuit = function.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**5, size=shape)
+ helpers.check_execution(circuit, function, sample)
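The compiled circuits above must match numpy's own reshape semantics. As a plain-numpy sanity check (outside the compiler), `flatten` agrees with `reshape(-1)`, and a single `-1` dimension is inferred from the remaining sizes:

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# flatten and reshape(-1) both produce the row-major (12,) flattening
assert np.array_equal(x.flatten(), x.reshape(-1))

# a single -1 is inferred so that the total element count is preserved
assert x.reshape(2, -1, 3).shape == (2, 2, 3)
```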
diff --git a/frontends/concrete-python/tests/execution/test_round_bit_pattern.py b/frontends/concrete-python/tests/execution/test_round_bit_pattern.py
new file mode 100644
index 000000000..dc48ff362
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_round_bit_pattern.py
@@ -0,0 +1,282 @@
+"""
+Tests of execution of round bit pattern operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+from concrete.numpy.representation.utils import format_constant
+
+
+@pytest.mark.parametrize(
+ "sample,lsbs_to_remove,expected_output",
+ [
+ (0b_0000_0011, 0, 0b_0000_0011),
+ (0b_0000_0100, 0, 0b_0000_0100),
+ (0b_0000_0000, 3, 0b_0000_0000),
+ (0b_0000_0001, 3, 0b_0000_0000),
+ (0b_0000_0010, 3, 0b_0000_0000),
+ (0b_0000_0011, 3, 0b_0000_0000),
+ (0b_0000_0100, 3, 0b_0000_1000),
+ (0b_0000_0101, 3, 0b_0000_1000),
+ (0b_0000_0110, 3, 0b_0000_1000),
+ (0b_0000_0111, 3, 0b_0000_1000),
+ (0b_0000_1000, 3, 0b_0000_1000),
+ (0b_0000_1001, 3, 0b_0000_1000),
+ (0b_0000_1010, 3, 0b_0000_1000),
+ (0b_0000_1011, 3, 0b_0000_1000),
+ (0b_0000_1100, 3, 0b_0001_0000),
+ (0b_0000_1101, 3, 0b_0001_0000),
+ (0b_0000_1110, 3, 0b_0001_0000),
+ (0b_0000_1111, 3, 0b_0001_0000),
+ ],
+)
+def test_plain_round_bit_pattern(sample, lsbs_to_remove, expected_output):
+ """
+ Test round bit pattern in evaluation context.
+ """
+ assert cnp.round_bit_pattern(sample, lsbs_to_remove=lsbs_to_remove) == expected_output
+
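The expected values in the table above follow a round-to-nearest rule on the cleared bits, with exact halfway values rounded up. A plain-Python sketch of that rule (independent of `cnp`, for illustration only):

```python
def round_bit_pattern(x: int, lsbs_to_remove: int) -> int:
    # round x to the nearest multiple of 2**lsbs_to_remove,
    # with ties (exact halfway values) rounded up
    if lsbs_to_remove == 0:
        return x
    unit = 2**lsbs_to_remove
    half = unit // 2
    return ((x + half) // unit) * unit

# matches the parametrized cases above, e.g. 0b0100 is a tie and rounds up
assert round_bit_pattern(0b0000_0100, 3) == 0b0000_1000
assert round_bit_pattern(0b0000_1011, 3) == 0b0000_1000
assert round_bit_pattern(0b0000_1100, 3) == 0b0001_0000
```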
+
+@pytest.mark.parametrize(
+ "sample,lsbs_to_remove,expected_error,expected_message",
+ [
+ (
+ np.array([3.2, 4.1]),
+ 3,
+ TypeError,
+ "Expected input elements to be integers but they are dtype[float64]",
+ ),
+ (
+ "foo",
+ 3,
+ TypeError,
+ "Expected input to be an int or a numpy array but it's str",
+ ),
+ ],
+)
+def test_bad_plain_round_bit_pattern(
+ sample,
+ lsbs_to_remove,
+ expected_error,
+ expected_message,
+):
+ """
+ Test round bit pattern in evaluation context with bad parameters.
+ """
+
+ with pytest.raises(expected_error) as excinfo:
+ cnp.round_bit_pattern(sample, lsbs_to_remove=lsbs_to_remove)
+
+ assert str(excinfo.value) == expected_message
+
+
+@pytest.mark.parametrize(
+ "input_bits,lsbs_to_remove",
+ [
+ (3, 1),
+ (3, 2),
+ (4, 1),
+ (4, 2),
+ (4, 3),
+ (5, 1),
+ (5, 2),
+ (5, 3),
+ (5, 4),
+ ],
+)
+def test_round_bit_pattern(input_bits, lsbs_to_remove, helpers):
+ """
+ Test round bit pattern in compiled circuits.
+ """
+
+ @cnp.compiler({"x": "encrypted"})
+ def function(x):
+ x_rounded = cnp.round_bit_pattern(x, lsbs_to_remove=lsbs_to_remove)
+ return np.abs(50 * np.sin(x_rounded)).astype(np.int64)
+
+ circuit = function.compile([(2**input_bits) - 1], helpers.configuration())
+ helpers.check_execution(circuit, function, np.random.randint(0, 2**input_bits), simulate=True)
+
+
+def test_auto_rounding(helpers):
+ """
+ Test round bit pattern with auto rounding.
+ """
+
+ # with auto adjust rounders configuration
+ # ---------------------------------------
+
+ # y has the max value of 1999, so it's 11 bits
+ # our target is 5 most significant bits, which means we need to remove the 6 least significant bits
+
+ rounder1 = cnp.AutoRounder(target_msbs=5)
+
+ @cnp.compiler({"x": "encrypted"})
+ def function1(x):
+ y = x + 1000
+ z = cnp.round_bit_pattern(y, lsbs_to_remove=rounder1)
+ return np.sqrt(z).astype(np.int64)
+
+ inputset1 = range(1000)
+ function1.trace(inputset1, helpers.configuration(), auto_adjust_rounders=True)
+
+ assert rounder1.lsbs_to_remove == 6
+
+ # manual
+ # ------
+
+ # y has the max value of 1999, so it's 11 bits
+ # our target is 3 most significant bits, which means we need to remove the 8 least significant bits
+
+ rounder2 = cnp.AutoRounder(target_msbs=3)
+
+ @cnp.compiler({"x": "encrypted"})
+ def function2(x):
+ y = x + 1000
+ z = cnp.round_bit_pattern(y, lsbs_to_remove=rounder2)
+ return np.sqrt(z).astype(np.int64)
+
+ inputset2 = range(1000)
+ cnp.AutoRounder.adjust(function2, inputset2)
+
+ assert rounder2.lsbs_to_remove == 8
+
+ # complicated case
+ # ----------------
+
+ # have 2 ** 8 entries during evaluation; the extra entries won't matter after compilation
+ entries3 = list(range(2**8))
+ # we have 8-bit inputs for this table, and we only want to use the first 5 bits
+ for i in range(0, 2**8, 2**3):
+ # so we set every 8th entry to a 4-bit value
+ entries3[i] = np.random.randint(0, (2**4) - (2**2))
+ # when this tlu is applied to an 8-bit value with 5-bit msb rounding, the result will be 4 bits
+ table3 = cnp.LookupTable(entries3)
+ # and this is the rounder for table3, which should have lsbs_to_remove of 3
+ rounder3 = cnp.AutoRounder(target_msbs=5)
+
+ # have 2 ** 8 entries during evaluation; the extra entries won't matter after compilation
+ entries4 = list(range(2**8))
+ # we have 4-bit inputs for this table, and we only want to use the first 2 bits
+ for i in range(0, 2**4, 2**2):
+ # so we set every 4th entry to an 8-bit value
+ entries4[i] = np.random.randint(2**7, 2**8)
+ # when this tlu is applied to a 4-bit value with 2-bit msb rounding, the result will be 8 bits
+ table4 = cnp.LookupTable(entries4)
+ # and this is the rounder for table4, which should have lsbs_to_remove of 2
+ rounder4 = cnp.AutoRounder(target_msbs=2)
+
+ @cnp.compiler({"x": "encrypted"})
+ def function3(x):
+ a = cnp.round_bit_pattern(x, lsbs_to_remove=rounder3)
+ b = table3[a]
+ c = cnp.round_bit_pattern(b, lsbs_to_remove=rounder4)
+ d = table4[c]
+ return d
+
+ inputset3 = range((2**8) - (2**3))
+ circuit3 = function3.compile(
+ inputset3,
+ helpers.configuration(),
+ auto_adjust_rounders=True,
+ )
+
+ assert rounder3.lsbs_to_remove == 3
+ assert rounder4.lsbs_to_remove == 2
+
+ table3_formatted_string = format_constant(table3.table, 25)
+ table4_formatted_string = format_constant(table4.table, 25)
+
+ helpers.check_str(
+ f"""
+
+%0 = x # EncryptedScalar
+%1 = round_bit_pattern(%0, lsbs_to_remove=3) # EncryptedScalar
+%2 = tlu(%1, table={table3_formatted_string}) # EncryptedScalar
+%3 = round_bit_pattern(%2, lsbs_to_remove=2) # EncryptedScalar
+%4 = tlu(%3, table={table4_formatted_string}) # EncryptedScalar
+return %4
+
+ """,
+ str(circuit3.graph.format(show_bounds=False)),
+ )
+
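The comments in `test_auto_rounding` imply a simple adjustment rule: the number of least significant bits to remove is the bit width of the largest intermediate value minus the target MSB count. A sketch of that implied rule (an assumption for illustration; the real `AutoRounder` adjustment may compute this differently):

```python
def implied_lsbs_to_remove(max_value: int, target_msbs: int) -> int:
    # bit width of the largest value the rounded node takes over the inputset
    bit_width = max_value.bit_length()
    return max(bit_width - target_msbs, 0)

# y peaks at 1999, which is 11 bits wide, matching the comments above
assert implied_lsbs_to_remove(1999, 5) == 6  # function1 / rounder1
assert implied_lsbs_to_remove(1999, 3) == 8  # function2 / rounder2
```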
+
+def test_auto_rounding_without_adjustment():
+ """
+ Test round bit pattern with auto rounding but without adjustment.
+ """
+
+ rounder = cnp.AutoRounder(target_msbs=5)
+
+ def function(x):
+ y = x + 1000
+ z = cnp.round_bit_pattern(y, lsbs_to_remove=rounder)
+ return np.sqrt(z).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ function(100)
+
+ assert str(excinfo.value) == (
+ "AutoRounders cannot be used before adjustment, "
+ "please call AutoRounder.adjust with the function that will be compiled "
+ "and provide the exact inputset that will be used for compilation"
+ )
+
+
+def test_auto_rounding_with_empty_inputset():
+ """
+ Test round bit pattern with auto rounding but with empty inputset.
+ """
+
+ rounder = cnp.AutoRounder(target_msbs=5)
+
+ def function(x):
+ y = x + 1000
+ z = cnp.round_bit_pattern(y, lsbs_to_remove=rounder)
+ return np.sqrt(z).astype(np.int64)
+
+ with pytest.raises(ValueError) as excinfo:
+ cnp.AutoRounder.adjust(function, [])
+
+ assert str(excinfo.value) == "AutoRounders cannot be adjusted with an empty inputset"
+
+
+def test_auto_rounding_recursive_adjustment():
+ """
+ Test round bit pattern with auto rounding but with recursive adjustment.
+ """
+
+ rounder = cnp.AutoRounder(target_msbs=5)
+
+ def function(x):
+ cnp.AutoRounder.adjust(function, range(10))
+ y = x + 1000
+ z = cnp.round_bit_pattern(y, lsbs_to_remove=rounder)
+ return np.sqrt(z).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ cnp.AutoRounder.adjust(function, range(10))
+
+ assert str(excinfo.value) == "AutoRounders cannot be adjusted recursively"
+
+
+def test_auto_rounding_construct_in_function():
+ """
+ Test round bit pattern with auto rounding but rounder is constructed within the function.
+ """
+
+ def function(x):
+ y = x + 1000
+ z = cnp.round_bit_pattern(y, lsbs_to_remove=cnp.AutoRounder(target_msbs=5))
+ return np.sqrt(z).astype(np.int64)
+
+ with pytest.raises(RuntimeError) as excinfo:
+ cnp.AutoRounder.adjust(function, range(10))
+
+ assert str(excinfo.value) == (
+ "AutoRounders cannot be constructed during adjustment, "
+ "please construct AutoRounders outside the function and reference it"
+ )
diff --git a/frontends/concrete-python/tests/execution/test_shift.py b/frontends/concrete-python/tests/execution/test_shift.py
new file mode 100644
index 000000000..f93fbf0e1
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_shift.py
@@ -0,0 +1,178 @@
+"""
+Tests of execution of shift operations.
+"""
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x << y,
+ id="x << y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 1], "status": "encrypted"},
+ "y": {"range": [0, 7], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 3], "status": "encrypted"},
+ "y": {"range": [0, 3], "status": "encrypted", "shape": (2,)},
+ },
+ {
+ "x": {"range": [0, 3], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [0, 3], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 3], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [0, 3], "status": "encrypted", "shape": (2,)},
+ },
+ ],
+)
+def test_left_shift(function, parameters, helpers):
+ """
+ Test left shift between encrypted integers.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x >> y,
+ id="x >> y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 1 << 7], "status": "encrypted"},
+ "y": {"range": [0, 7], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 1 << 4], "status": "encrypted"},
+ "y": {"range": [0, 3], "status": "encrypted", "shape": (2,)},
+ },
+ {
+ "x": {"range": [0, 1 << 4], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [0, 3], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 1 << 4], "status": "encrypted", "shape": (2,)},
+ "y": {"range": [0, 3], "status": "encrypted", "shape": (2,)},
+ },
+ ],
+)
+def test_right_shift(function, parameters, helpers):
+ """
+ Test right shift between encrypted integers.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x << y,
+ id="x << y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 1], "status": "encrypted"},
+ "y": {"range": [0, 7], "status": "encrypted"},
+ },
+ ],
+)
+def test_left_shift_coverage(function, parameters, helpers):
+ """
+ Test all cases of left shift between encrypted integers.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ for i in range(2):
+ for j in range(8):
+ helpers.check_execution(circuit, function, [i, j])
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x >> y,
+ id="x >> y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 1 << 7], "status": "encrypted"},
+ "y": {"range": [0, 7], "status": "encrypted"},
+ },
+ ],
+)
+def test_right_shift_coverage(function, parameters, helpers):
+ """
+ Test all cases of right shift between encrypted integers.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ helpers.check_execution(circuit, function, [0b11, 0])
+ helpers.check_execution(circuit, function, [0b11, 1])
+ helpers.check_execution(circuit, function, [0b110, 2])
+ helpers.check_execution(circuit, function, [0b1100, 3])
+ helpers.check_execution(circuit, function, [0b11000, 4])
+ helpers.check_execution(circuit, function, [0b110000, 5])
+ helpers.check_execution(circuit, function, [0b110000, 6])
+ helpers.check_execution(circuit, function, [0b1100000, 7])
diff --git a/frontends/concrete-python/tests/execution/test_static_assignment.py b/frontends/concrete-python/tests/execution/test_static_assignment.py
new file mode 100644
index 000000000..2f98076b7
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_static_assignment.py
@@ -0,0 +1,530 @@
+"""
+Tests of execution of static assignment operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+def assignment_case_0():
+ """
+ Assignment test case.
+ """
+
+ shape = (3,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[:] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_1():
+ """
+ Assignment test case.
+ """
+
+ shape = (3,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[0] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_2():
+ """
+ Assignment test case.
+ """
+
+ shape = (3,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[1] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_3():
+ """
+ Assignment test case.
+ """
+
+ shape = (3,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[2] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_4():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[0:3] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_5():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[1:4] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_6():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[1:4:2] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_7():
+ """
+ Assignment test case.
+ """
+
+ shape = (10,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[::2] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_8():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[2:0:-1] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_9():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[4:0:-2] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_10():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=(3,))
+
+ def assign(x):
+ x[1:4] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_11():
+ """
+ Assignment test case.
+ """
+
+ shape = (5,)
+ value = np.random.randint(0, 2**7, size=(3,))
+
+ def assign(x):
+ x[4:1:-1] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_12():
+ """
+ Assignment test case.
+ """
+
+ shape = (10,)
+ value = np.random.randint(0, 2**7, size=(3,))
+
+ def assign(x):
+ x[1:7:2] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_13():
+ """
+ Assignment test case.
+ """
+
+ shape = (10,)
+ value = np.random.randint(0, 2**7, size=(3,))
+
+ def assign(x):
+ x[7:1:-2] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_14():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[0, 0] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_15():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[3, 1] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_16():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[0] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_17():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(4,))
+
+ def assign(x):
+ x[0] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_18():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(5,))
+
+ def assign(x):
+ x[:, 0] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_19():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(5,))
+
+ def assign(x):
+ x[:, 1] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_20():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[0:3, :] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_21():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(3, 4))
+
+ def assign(x):
+ x[0:3, :] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_22():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(4,))
+
+ def assign(x):
+ x[0:3, :] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_23():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(3,))
+
+ def assign(x):
+ x[0:3, 1:4] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_24():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(3, 3))
+
+ def assign(x):
+ x[0:3, 1:4] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_25():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(3, 3))
+
+ def assign(x):
+ x[4:1:-1, 3:0:-1] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_26():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(3,))
+
+ def assign(x):
+ x[3:0:-1, 0] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_27():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=(2,))
+
+ def assign(x):
+ x[0, 1:3] = value
+ return x
+
+ return shape, assign
+
+
+def assignment_case_28():
+ """
+ Assignment test case.
+ """
+
+ shape = (5, 4)
+ value = np.random.randint(0, 2**7, size=())
+
+ def assign(x):
+ x[2:4, 1:3] = value
+ return x
+
+ return shape, assign
+
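Several of the cases above (8, 9, 11, 13, 25, 26) assign through negative-step slices. Plain numpy shows exactly which indices such a slice targets, which is the behavior the compiled assignment must reproduce:

```python
import numpy as np

x = np.zeros(5, dtype=np.int64)
x[2:0:-1] = 7  # slice(2, 0, -1) covers indices 2 and 1 (stop is exclusive)
assert list(x) == [0, 7, 7, 0, 0]

y = np.zeros(5, dtype=np.int64)
y[4:0:-2] = np.array([1, 2])  # covers indices 4 and 2, in that order
assert list(y) == [0, 0, 2, 0, 1]
```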
+
+@pytest.mark.parametrize(
+ "shape,function",
+ [
+ pytest.param(*assignment_case_0()),
+ pytest.param(*assignment_case_1()),
+ pytest.param(*assignment_case_2()),
+ pytest.param(*assignment_case_3()),
+ pytest.param(*assignment_case_4()),
+ pytest.param(*assignment_case_5()),
+ pytest.param(*assignment_case_6()),
+ pytest.param(*assignment_case_7()),
+ pytest.param(*assignment_case_8()),
+ pytest.param(*assignment_case_9()),
+ pytest.param(*assignment_case_10()),
+ pytest.param(*assignment_case_11()),
+ pytest.param(*assignment_case_12()),
+ pytest.param(*assignment_case_13()),
+ pytest.param(*assignment_case_14()),
+ pytest.param(*assignment_case_15()),
+ pytest.param(*assignment_case_16()),
+ pytest.param(*assignment_case_17()),
+ pytest.param(*assignment_case_18()),
+ pytest.param(*assignment_case_19()),
+ pytest.param(*assignment_case_20()),
+ pytest.param(*assignment_case_21()),
+ pytest.param(*assignment_case_22()),
+ pytest.param(*assignment_case_23()),
+ pytest.param(*assignment_case_24()),
+ pytest.param(*assignment_case_25()),
+ pytest.param(*assignment_case_26()),
+ pytest.param(*assignment_case_27()),
+ pytest.param(*assignment_case_28()),
+ ],
+)
+def test_static_assignment(shape, function, helpers):
+ """
+ Test static assignment.
+ """
+
+ configuration = helpers.configuration()
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**7, size=shape) for _ in range(100)]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**7, size=shape)
+ helpers.check_execution(circuit, function, sample)
+
+
+def test_bad_static_assignment(helpers):
+ """
+ Test static assignment with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ # with float
+ # ----------
+
+ def f(x):
+ x[1.5] = 0
+ return x
+
+ compiler = cnp.Compiler(f, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**3, size=(3,)) for _ in range(100)]
+ with pytest.raises(ValueError) as excinfo:
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == "Assigning to '1.5' is not supported"
+
+ # with bad slice
+ # --------------
+
+ def g(x):
+ x[slice(1.5, 2.5, None)] = 0
+ return x
+
+ compiler = cnp.Compiler(g, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**3, size=(3,)) for _ in range(100)]
+ with pytest.raises(ValueError) as excinfo:
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == "Assigning to '1.5:2.5' is not supported"
diff --git a/frontends/concrete-python/tests/execution/test_static_indexing.py b/frontends/concrete-python/tests/execution/test_static_indexing.py
new file mode 100644
index 000000000..07772c153
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_static_indexing.py
@@ -0,0 +1,198 @@
+"""
+Tests of execution of static indexing operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "shape,function",
+ [
+ pytest.param(
+ (3,),
+ lambda x: x[0],
+ id="x[0] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[1],
+ id="x[1] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[2],
+ id="x[2] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[-1],
+ id="x[-1] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[-2],
+ id="x[-2] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[-3],
+ id="x[-3] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[0:1],
+ id="x[0:1] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[1:2],
+ id="x[1:2] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[2:3],
+ id="x[2:3] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[0:2],
+ id="x[0:2] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[1:3],
+ id="x[1:3] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[0:3],
+ id="x[0:3] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[2:0:-1],
+ id="x[2:0:-1] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[::-1],
+ id="x[::-1] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[:-1],
+ id="x[:-1] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3,),
+ lambda x: x[-2:],
+ id="x[-2:] where x.shape == (3,)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[0, 0],
+ id="x[0, 0] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[0, -1],
+ id="x[0, -1] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[-1, 0],
+ id="x[-1, 0] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[-1, -1],
+ id="x[-1, -1] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[2, 1],
+ id="x[2, 1] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[0],
+ id="x[0] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[:, 0],
+ id="x[:, 0] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[-1],
+ id="x[-1] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[:, -1],
+ id="x[:, -1] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[1:3, 1:3],
+ id="x[1:3, 1:3] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (3, 4),
+ lambda x: x[::-1],
+ id="x[::-1] where x.shape == (3, 4)",
+ ),
+ pytest.param(
+ (10,),
+ lambda x: x[slice(np.int64(8), np.int64(2), np.int64(-2))],
+ id="x[8:2:-2] where x.shape == (10,)",
+ ),
+ ],
+)
+def test_static_indexing(shape, function, helpers):
+ """
+ Test static indexing.
+ """
+
+ configuration = helpers.configuration()
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**5, size=shape) for _ in range(100)]
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = np.random.randint(0, 2**5, size=shape)
+ helpers.check_execution(circuit, function, sample)
+
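The final parametrized case above indexes with a slice built from `np.int64` components. In plain numpy, numpy integers behave exactly like Python ints in slices, and `8:2:-2` selects indices 8, 6, 4:

```python
import numpy as np

x = np.arange(10)

# numpy integer slice components behave like plain ints
assert list(x[slice(np.int64(8), np.int64(2), np.int64(-2))]) == [8, 6, 4]
assert list(x[8:2:-2]) == [8, 6, 4]
```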
+
+def test_bad_static_indexing(helpers):
+ """
+ Test static indexing with bad parameters.
+ """
+
+ configuration = helpers.configuration()
+
+ # with float
+ # ----------
+
+ compiler = cnp.Compiler(lambda x: x[1.5], {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**3, size=(3,)) for _ in range(100)]
+ with pytest.raises(ValueError) as excinfo:
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == "Indexing with '1.5' is not supported"
+
+ # with bad slice
+ # --------------
+
+ compiler = cnp.Compiler(lambda x: x[slice(1.5, 2.5, None)], {"x": "encrypted"})
+
+ inputset = [np.random.randint(0, 2**3, size=(3,)) for _ in range(100)]
+ with pytest.raises(ValueError) as excinfo:
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == "Indexing with '1.5:2.5' is not supported"
diff --git a/frontends/concrete-python/tests/execution/test_sub.py b/frontends/concrete-python/tests/execution/test_sub.py
new file mode 100644
index 000000000..8c3fdd313
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_sub.py
@@ -0,0 +1,159 @@
+"""
+Tests of execution of sub operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x: x - 42,
+ id="x - 42",
+ ),
+ pytest.param(
+ lambda x: 42 - x,
+ id="42 - x",
+ ),
+ pytest.param(
+ lambda x: x - np.array([1, 2, 3]),
+ id="x - [1, 2, 3]",
+ ),
+ pytest.param(
+ lambda x: np.array([1, 2, 3]) - x,
+ id="[1, 2, 3] - x",
+ ),
+ pytest.param(
+ lambda x: x - np.array([[1, 2, 3], [4, 5, 6]]),
+ id="x - [[1, 2, 3], [4, 5, 6]]",
+ ),
+ pytest.param(
+ lambda x: np.array([[1, 2, 3], [4, 5, 6]]) - x,
+ id="[[1, 2, 3], [4, 5, 6]] - x",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (2, 3)},
+ },
+ ],
+)
+def test_constant_sub(function, parameters, helpers):
+ """
+ Test sub where one of the operators is a constant.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x, y: x - y,
+ id="x - y",
+ ),
+ ],
+)
+@pytest.mark.parametrize(
+ "parameters",
+ [
+ {
+ "x": {"range": [0, 60], "status": "clear"},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "clear"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "clear"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted"},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear"},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted"},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "clear", "shape": (2, 1)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (2, 1)},
+ "y": {"range": [0, 60], "status": "clear", "shape": (3,)},
+ },
+ {
+ "x": {"range": [0, 60], "status": "encrypted", "shape": (2, 1)},
+ "y": {"range": [0, 60], "status": "encrypted", "shape": (3,)},
+ },
+ ],
+)
+def test_sub(function, parameters, helpers):
+ """
+    Test sub where both operands are dynamic.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
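The `(2, 1)` / `(3,)` parameter sets above exercise NumPy broadcasting. As a cleartext sketch of the shapes involved (plain NumPy, outside the compiler, assuming `helpers.check_execution` compares the circuit's output against the plain Python function):

```python
import numpy as np

# Shapes (2, 1) and (3,) broadcast to (2, 3): the size-1 axis of x is
# stretched along y's last axis before the elementwise subtraction.
x = np.array([[10], [20]])  # shape (2, 1)
y = np.array([1, 2, 3])     # shape (3,)
result = x - y              # shape (2, 3)
print(result.tolist())      # [[9, 8, 7], [19, 18, 17]]
```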
diff --git a/frontends/concrete-python/tests/execution/test_sum.py b/frontends/concrete-python/tests/execution/test_sum.py
new file mode 100644
index 000000000..01e2b94ab
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_sum.py
@@ -0,0 +1,120 @@
+"""
+Tests of execution of sum operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function,parameters",
+ [
+ pytest.param(
+ lambda x: np.sum(x),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=None), # type: ignore
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=0),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=1),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=-1),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=-2),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=(0, 1)),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=(-2, -1)),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=0, keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=1, keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=-1, keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=-2, keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=(0, 1), keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.sum(x, axis=(-2, -1), keepdims=True),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ ],
+)
+def test_sum(function, parameters, helpers):
+ """
+ Test sum.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
diff --git a/frontends/concrete-python/tests/execution/test_transpose.py b/frontends/concrete-python/tests/execution/test_transpose.py
new file mode 100644
index 000000000..bfe17def7
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_transpose.py
@@ -0,0 +1,78 @@
+"""
+Tests of execution of transpose operation.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function,parameters",
+ [
+ pytest.param(
+ lambda x: np.transpose(x),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: x.transpose(),
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: x.T,
+ {
+ "x": {"shape": (3, 2), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: x.transpose((1, 0, 2)),
+ {
+ "x": {"shape": (2, 3, 4), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: x.transpose((1, 2, 0)),
+ {
+ "x": {"shape": (2, 3, 4), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: x.transpose((0, 2, 1)),
+ {
+ "x": {"shape": (2, 3, 4), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: x.transpose((2, 0, 1)),
+ {
+ "x": {"shape": (2, 3, 4), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ pytest.param(
+ lambda x: np.transpose(x, (3, 0, 2, 1)),
+ {
+ "x": {"shape": (2, 3, 4, 5), "range": [0, 10], "status": "encrypted"},
+ },
+ ),
+ ],
+)
+def test_transpose(function, parameters, helpers):
+ """
+ Test transpose.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ inputset = helpers.generate_inputset(parameters)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = helpers.generate_sample(parameters)
+ helpers.check_execution(circuit, function, sample)
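The axis permutations tested above follow standard NumPy semantics, shown here on cleartext arrays: with no `axes` argument, `transpose` reverses the axis order; with a permutation, output axis `i` takes its length from input axis `perm[i]`.

```python
import numpy as np

x = np.zeros((2, 3, 4))
print(x.transpose().shape)           # (4, 3, 2): axes reversed by default
print(x.transpose((1, 2, 0)).shape)  # (3, 4, 2)
print(x.transpose((0, 2, 1)).shape)  # (2, 4, 3)
```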
diff --git a/frontends/concrete-python/tests/execution/test_zeros.py b/frontends/concrete-python/tests/execution/test_zeros.py
new file mode 100644
index 000000000..6539dc09c
--- /dev/null
+++ b/frontends/concrete-python/tests/execution/test_zeros.py
@@ -0,0 +1,49 @@
+"""
+Tests of execution of zeros operation.
+"""
+
+import random
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function",
+ [
+ pytest.param(
+ lambda x: cnp.zero() + x,
+ id="cnp.zero() + x",
+ ),
+ pytest.param(
+ lambda x: cnp.zeros(()) + x,
+ id="cnp.zeros(()) + x",
+ ),
+ pytest.param(
+ lambda x: cnp.zeros(10) + x,
+ id="cnp.zeros(10) + x",
+ ),
+ pytest.param(
+ lambda x: cnp.zeros((10,)) + x,
+ id="cnp.zeros((10,)) + x",
+ ),
+ pytest.param(
+ lambda x: cnp.zeros((3, 2)) + x,
+ id="cnp.zeros((3, 2)) + x",
+ ),
+ ],
+)
+def test_zeros(function, helpers):
+ """
+ Test zeros.
+ """
+
+ configuration = helpers.configuration()
+ compiler = cnp.Compiler(function, {"x": "encrypted"})
+
+ inputset = range(10)
+ circuit = compiler.compile(inputset, configuration)
+
+ sample = random.randint(0, 11)
+ helpers.check_execution(circuit, function, sample)
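A cleartext analogue of the cases above (plain NumPy, assuming `cnp.zeros` mirrors NumPy's broadcasting): adding a scalar to a zero tensor broadcasts the scalar across the tensor's shape, so `zeros(shape) + x` yields a tensor of `x`'s value with that shape.

```python
import numpy as np

x = 7
result = np.zeros((3, 2), dtype=np.int64) + x  # scalar broadcast over (3, 2)
print(result.tolist())                         # [[7, 7], [7, 7], [7, 7]]
```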
diff --git a/frontends/concrete-python/tests/extensions/__init__.py b/frontends/concrete-python/tests/extensions/__init__.py
new file mode 100644
index 000000000..af9e23ae7
--- /dev/null
+++ b/frontends/concrete-python/tests/extensions/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of extensions.
+"""
diff --git a/frontends/concrete-python/tests/extensions/test_array.py b/frontends/concrete-python/tests/extensions/test_array.py
new file mode 100644
index 000000000..1489d1552
--- /dev/null
+++ b/frontends/concrete-python/tests/extensions/test_array.py
@@ -0,0 +1,37 @@
+"""
+Tests of 'array' extension.
+"""
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+@pytest.mark.parametrize(
+ "function,parameters,expected_error",
+ [
+ pytest.param(
+ lambda x, y: cnp.array([x, y]),
+ {
+ "x": {"range": [0, 10], "status": "encrypted", "shape": ()},
+ "y": {"range": [0, 10], "status": "encrypted", "shape": (2, 3)},
+ },
+ "Encrypted arrays can only be created from scalars",
+ ),
+ ],
+)
+def test_bad_array(function, parameters, expected_error, helpers):
+ """
+ Test array with bad parameters.
+ """
+
+ parameter_encryption_statuses = helpers.generate_encryption_statuses(parameters)
+ configuration = helpers.configuration()
+
+ compiler = cnp.Compiler(function, parameter_encryption_statuses)
+
+ with pytest.raises(ValueError) as excinfo:
+ inputset = helpers.generate_inputset(parameters)
+ compiler.compile(inputset, configuration)
+
+ assert str(excinfo.value) == expected_error
diff --git a/frontends/concrete-python/tests/extensions/test_table.py b/frontends/concrete-python/tests/extensions/test_table.py
new file mode 100644
index 000000000..46912e35a
--- /dev/null
+++ b/frontends/concrete-python/tests/extensions/test_table.py
@@ -0,0 +1,28 @@
+"""
+Tests of 'LookupTable' extension.
+"""
+
+import pytest
+
+from concrete.numpy import LookupTable
+
+
+@pytest.mark.parametrize(
+ "table, expected_result",
+ [
+ pytest.param(
+ LookupTable([1, 2, 3]),
+ "[1, 2, 3]",
+ ),
+ pytest.param(
+ LookupTable([LookupTable([1, 2, 3]), LookupTable([4, 5, 6])]),
+ "[[1, 2, 3], [4, 5, 6]]",
+ ),
+ ],
+)
+def test_lookup_table_repr(table, expected_result):
+ """
+ Test `__repr__` method of `LookupTable` class.
+ """
+
+ assert repr(table) == expected_result
diff --git a/frontends/concrete-python/tests/extensions/test_tag.py b/frontends/concrete-python/tests/extensions/test_tag.py
new file mode 100644
index 000000000..1a14e2c5f
--- /dev/null
+++ b/frontends/concrete-python/tests/extensions/test_tag.py
@@ -0,0 +1,64 @@
+"""
+Tests of 'tag' extension.
+"""
+
+import numpy as np
+
+import concrete.numpy as cnp
+
+
+def test_tag(helpers):
+ """
+ Test tag extension.
+ """
+
+ def g(z):
+ with cnp.tag("def"):
+ a = 120 - z
+ b = a // 4
+ return b
+
+ @cnp.compiler({"x": "encrypted"})
+ def f(x):
+ with cnp.tag("abc"):
+ x = x * 2
+ with cnp.tag("foo"):
+ y = x + 42
+ z = np.sqrt(y).astype(np.int64)
+
+ return g(z + 3) * 2
+
+ inputset = range(10)
+ circuit = f.trace(inputset, configuration=helpers.configuration())
+
+ helpers.check_str(
+ """
+
+ %0 = x # EncryptedScalar
+ %1 = 2 # ClearScalar @ abc
+ %2 = multiply(%0, %1) # EncryptedScalar @ abc
+ %3 = 42 # ClearScalar @ abc.foo
+ %4 = add(%2, %3) # EncryptedScalar @ abc.foo
+ %5 = subgraph(%4) # EncryptedScalar @ abc
+ %6 = 3 # ClearScalar
+ %7 = add(%5, %6) # EncryptedScalar
+ %8 = 120 # ClearScalar @ def
+ %9 = subtract(%8, %7) # EncryptedScalar @ def
+%10 = 4 # ClearScalar @ def
+%11 = floor_divide(%9, %10) # EncryptedScalar @ def
+%12 = 2 # ClearScalar
+%13 = multiply(%11, %12) # EncryptedScalar
+return %13
+
+Subgraphs:
+
+ %5 = subgraph(%4):
+
+ %0 = input # EncryptedScalar @ abc.foo
+ %1 = sqrt(%0) # EncryptedScalar @ abc
+ %2 = astype(%1, dtype=int_) # EncryptedScalar @ abc
+ return %2
+
+ """.strip(),
+ circuit.format(show_bounds=False),
+ )
diff --git a/frontends/concrete-python/tests/extensions/test_univariate.py b/frontends/concrete-python/tests/extensions/test_univariate.py
new file mode 100644
index 000000000..e60bb57f4
--- /dev/null
+++ b/frontends/concrete-python/tests/extensions/test_univariate.py
@@ -0,0 +1,24 @@
+"""
+Tests of 'univariate' extension.
+"""
+
+import pytest
+
+import concrete.numpy as cnp
+
+
+def test_bad_univariate(helpers):
+ """
+ Test 'univariate' extension with bad parameters.
+ """
+
+ with pytest.raises(ValueError) as excinfo:
+
+ @cnp.circuit({"x": "encrypted"}, helpers.configuration())
+ def function(x: cnp.uint3):
+ return cnp.univariate(lambda x: x**2)(x)
+
+ assert str(excinfo.value) == (
+ "Univariate extension requires `outputs` argument for direct circuit definition "
+ "(e.g., cnp.univariate(function, outputs=cnp.uint4)(x))"
+ )
diff --git a/frontends/concrete-python/tests/internal/__init__.py b/frontends/concrete-python/tests/internal/__init__.py
new file mode 100644
index 000000000..4a8830f30
--- /dev/null
+++ b/frontends/concrete-python/tests/internal/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of `concrete.numpy.internal` namespace.
+"""
diff --git a/frontends/concrete-python/tests/internal/test_utils.py b/frontends/concrete-python/tests/internal/test_utils.py
new file mode 100644
index 000000000..57a9fd7de
--- /dev/null
+++ b/frontends/concrete-python/tests/internal/test_utils.py
@@ -0,0 +1,29 @@
+"""
+Tests of utilities related to the entire project.
+"""
+
+import pytest
+
+from concrete.numpy.internal.utils import assert_that, unreachable
+
+
+def test_assert_that():
+ """
+ Test `assert_that` function.
+ """
+
+ with pytest.raises(AssertionError) as excinfo:
+ assert_that(2 + 2 == 3, "no")
+
+ assert str(excinfo.value) == "no"
+
+
+def test_unreachable():
+ """
+ Test `unreachable` function.
+ """
+
+ with pytest.raises(RuntimeError) as excinfo:
+ unreachable()
+
+ assert str(excinfo.value) == "Entered unreachable code"
diff --git a/frontends/concrete-python/tests/mlir/__init__.py b/frontends/concrete-python/tests/mlir/__init__.py
new file mode 100644
index 000000000..dcfe544c8
--- /dev/null
+++ b/frontends/concrete-python/tests/mlir/__init__.py
@@ -0,0 +1,3 @@
+"""
+Tests of `concrete.numpy.mlir` namespace.
+"""
diff --git a/frontends/concrete-python/tests/mlir/test_graph_converter.py b/frontends/concrete-python/tests/mlir/test_graph_converter.py
new file mode 100644
index 000000000..3e48401ee
--- /dev/null
+++ b/frontends/concrete-python/tests/mlir/test_graph_converter.py
@@ -0,0 +1,501 @@
+"""
+Tests of `GraphConverter` class.
+"""
+
+import numpy as np
+import pytest
+
+import concrete.numpy as cnp
+import concrete.onnx as connx
+
+# pylint: disable=line-too-long
+
+
+def assign(x):
+ """
+ Simple assignment to a vector.
+ """
+
+ x[0] = 0
+ return x
+
+
+@pytest.mark.parametrize(
+ "function,encryption_statuses,inputset,expected_error,expected_message",
+ [
+ pytest.param(
+ lambda x, y: (x - y, x + y),
+ {"x": "encrypted", "y": "clear"},
+ [(0, 0), (7, 7), (0, 7), (7, 0)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar ∈ [0, 7]
+%1 = y # ClearScalar ∈ [0, 7]
+%2 = subtract(%0, %1) # EncryptedScalar ∈ [-7, 7]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only a single output is supported
+%3 = add(%0, %1) # EncryptedScalar ∈ [0, 14]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only a single output is supported
+return (%2, %3)
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: x,
+ {"x": "clear"},
+ range(-10, 10),
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearScalar ∈ [-10, 9]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted signed integer inputs are supported
+return %0
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: x * 1.5,
+ {"x": "encrypted"},
+ [2.5 * x for x in range(100)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar ∈ [0.0, 247.5]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer inputs are supported
+%1 = 1.5 # ClearScalar ∈ [1.5, 1.5]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer constants are supported
+%2 = multiply(%0, %1) # EncryptedScalar ∈ [0.0, 371.25]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: np.sin(x),
+ {"x": "encrypted"},
+ range(100),
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar ∈ [0, 99]
+%1 = sin(%0) # EncryptedScalar ∈ [-0.99999, 0.999912]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only integer operations are supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, y: np.concatenate((x, y)),
+ {"x": "encrypted", "y": "clear"},
+ [
+ (
+ np.random.randint(0, 2**3, size=(3, 2)),
+ np.random.randint(0, 2**3, size=(3, 2)),
+ )
+ for _ in range(100)
+ ],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedTensor ∈ [0, 7]
+%1 = y # ClearTensor ∈ [0, 7]
+%2 = concatenate((%0, %1)) # EncryptedTensor ∈ [0, 7]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only all encrypted concatenate is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, w: connx.conv(x, w),
+ {"x": "encrypted", "w": "encrypted"},
+ [
+ (
+ np.random.randint(0, 2, size=(1, 1, 4)),
+ np.random.randint(0, 2, size=(1, 1, 1)),
+ )
+ for _ in range(100)
+ ],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedTensor ∈ [0, 1]
+%1 = w # EncryptedTensor ∈ [0, 1]
+%2 = conv1d(%0, %1, [0], pads=(0, 0), strides=(1,), dilations=(1,), group=1) # EncryptedTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only conv1d with encrypted input and clear weight is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, w: connx.conv(x, w),
+ {"x": "encrypted", "w": "encrypted"},
+ [
+ (
+ np.random.randint(0, 2, size=(1, 1, 4, 4)),
+ np.random.randint(0, 2, size=(1, 1, 1, 1)),
+ )
+ for _ in range(100)
+ ],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedTensor ∈ [0, 1]
+%1 = w # EncryptedTensor ∈ [0, 1]
+%2 = conv2d(%0, %1, [0], pads=(0, 0, 0, 0), strides=(1, 1), dilations=(1, 1), group=1) # EncryptedTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only conv2d with encrypted input and clear weight is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, w: connx.conv(x, w),
+ {"x": "encrypted", "w": "encrypted"},
+ [
+ (
+ np.random.randint(0, 2, size=(1, 1, 4, 4, 4)),
+ np.random.randint(0, 2, size=(1, 1, 1, 1, 1)),
+ )
+ for _ in range(100)
+ ],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedTensor ∈ [0, 1]
+%1 = w # EncryptedTensor ∈ [0, 1]
+%2 = conv3d(%0, %1, [0], pads=(0, 0, 0, 0, 0, 0), strides=(1, 1, 1), dilations=(1, 1, 1), group=1) # EncryptedTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only conv3d with encrypted input and clear weight is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, y: np.dot(x, y),
+ {"x": "encrypted", "y": "encrypted"},
+ [([0], [0]), ([3], [3]), ([3], [0]), ([0], [3]), ([1], [1])],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedTensor ∈ [0, 3]
+%1 = y # EncryptedTensor ∈ [0, 3]
+%2 = dot(%0, %1) # EncryptedScalar ∈ [0, 9]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only dot product between encrypted and clear is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: x[0],
+ {"x": "clear"},
+ [[0, 1, 2, 3], [7, 6, 5, 4]],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 7]
+%1 = %0[0] # ClearScalar ∈ [0, 7]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted indexing supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, y: x @ y,
+ {"x": "encrypted", "y": "encrypted"},
+ [
+ (
+ np.random.randint(0, 2**1, size=(1, 1)),
+ np.random.randint(0, 2**1, size=(1, 1)),
+ )
+ for _ in range(100)
+ ],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedTensor ∈ [0, 1]
+%1 = y # EncryptedTensor ∈ [0, 1]
+%2 = matmul(%0, %1) # EncryptedTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only matrix multiplication between encrypted and clear is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, y: x * y,
+ {"x": "encrypted", "y": "encrypted"},
+ [(0, 0), (7, 7), (0, 7), (7, 0)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar ∈ [0, 7]
+%1 = y # EncryptedScalar ∈ [0, 7]
+%2 = multiply(%0, %1) # EncryptedScalar ∈ [0, 49]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only multiplication between encrypted and clear is supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: -x,
+ {"x": "clear"},
+ [0, 7],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearScalar ∈ [0, 7]
+%1 = negative(%0) # ClearScalar ∈ [-7, 0]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted negation is supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: x.reshape((3, 2)),
+ {"x": "clear"},
+ [np.random.randint(0, 2**3, size=(2, 3)) for _ in range(100)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 7]
+%1 = reshape(%0, newshape=(3, 2)) # ClearTensor ∈ [0, 7]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted reshape is supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: np.sum(x),
+ {"x": "clear"},
+ [np.random.randint(0, 2, size=(1,)) for _ in range(100)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 1]
+%1 = sum(%0) # ClearScalar ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted sum is supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: np.maximum(x, np.array([3])),
+ {"x": "clear"},
+ [[0], [1]],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 1]
+%1 = [3] # ClearTensor ∈ [3, 3]
+%2 = maximum(%0, %1) # ClearTensor ∈ [3, 3]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ one of the operands must be encrypted
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: np.transpose(x),
+ {"x": "clear"},
+ [np.random.randint(0, 2, size=(3, 2)) for _ in range(10)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 1]
+%1 = transpose(%0) # ClearTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted transpose is supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: np.broadcast_to(x, shape=(3, 2)),
+ {"x": "clear"},
+ [np.random.randint(0, 2, size=(2,)) for _ in range(10)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 1]
+%1 = broadcast_to(%0, shape=(3, 2)) # ClearTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only encrypted broadcasting is supported
+return %1
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ assign,
+ {"x": "clear"},
+ [np.random.randint(0, 2, size=(3,)) for _ in range(10)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # ClearTensor ∈ [0, 1]
+%1 = 0 # ClearScalar ∈ [0, 0]
+%2 = (%0[0] = %1) # ClearTensor ∈ [0, 1]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only assignment to encrypted tensors are supported
+return %2
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x: np.abs(10 * np.sin(x + 300)).astype(np.int64),
+ {"x": "encrypted"},
+ [200000],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR:
+
+%0 = x # EncryptedScalar ∈ [200000, 200000]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this input is 18-bits
+%1 = 300 # ClearScalar ∈ [300, 300]
+%2 = add(%0, %1) # EncryptedScalar ∈ [200300, 200300]
+%3 = subgraph(%2) # EncryptedScalar ∈ [9, 9]
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ table lookups are only supported on circuits with up to 16-bits
+return %3
+
+Subgraphs:
+
+ %3 = subgraph(%2):
+
+ %0 = input # EncryptedScalar
+ %1 = sin(%0) # EncryptedScalar
+ %2 = 10 # ClearScalar
+ %3 = multiply(%2, %1) # EncryptedScalar
+ %4 = absolute(%3) # EncryptedScalar
+ %5 = astype(%4, dtype=int_) # EncryptedScalar
+ return %5
+
+ """, # noqa: E501
+ ),
+ pytest.param(
+ lambda x, y: x << y,
+ {"x": "encrypted", "y": "encrypted"},
+ [(-1, 1), (-2, 3)],
+ RuntimeError,
+ """
+
+Function you are trying to compile cannot be converted to MLIR
+
+%0 = x # EncryptedScalar ∈ [-2, -1]
+%1 = y # EncryptedScalar ∈ [1, 3]
+%2 = left_shift(%0, %1) # EncryptedScalar