Docs updates

This commit is contained in:
Ben Haynes
2020-10-06 09:03:20 -04:00
parent 729c9d1ed0
commit 2ea88addeb
9 changed files with 437 additions and 308 deletions

View File

@@ -173,6 +173,10 @@ const sections: Section[] = [
name: 'Error Codes',
to: '/docs/reference/error-codes',
},
{
name: 'Item Rules',
to: '/docs/reference/item-rules',
},
],
},
];

View File

@@ -285,6 +285,7 @@ export default defineComponent({
}
table {
min-width: 100%;
margin: 40px 0;
padding: 0;
border-collapse: collapse;

View File

@@ -94,21 +94,21 @@ The register function (eg: `module.exports = function registerHook()`) must retu
The `registerHook` function receives a context parameter with the following properties:
* `services` — All API internal services
* `exceptions` — API exception objects that can be used for throwing "proper" errors
* `database` — Knex instance that is connected to the current database
* `env` — Parsed environment variables
### Event Handler Function
The event handler function (eg: `'items.create': function()`) receives a context parameter with the following properties:
* `event` — Full event string
* `accountability` — Information about the current user
* `collection` — Collection that is being modified
* `item` — Primary key(s) of the item(s) being modified
* `action` — Action that is performed
* `payload` — Payload of the request
## 5. Restart the API

View File

@@ -10,19 +10,19 @@
4. Enabling **App Access** allows logging in to the App
5. Enabling **Admin Access** gives full permission to project data and Settings
## Configuring a Role
* **Permissions** — Defines the role's access permissions, see [Configuring Role Permissions](#) and [Configuring System Permissions](#)
* **Role Name** — This is the name of the role
* **Role Icon** — The icon used throughout the App when referencing this role
* **Description** — A helpful note that explains the role's purpose
* **App Access** — Allows logging in to the App
* **Admin Access** — Gives full permission to project data and Settings
* **IP Access** — An allow-list of IP addresses from which the platform can be accessed, empty allows all
* **Require 2FA** — Forces all users within this role to use two-factor authentication
* **Users in Role** — A list of all users within this role
* **Module Navigation** — Overrides the visible modules, see [Customizing the Module Navigation](#)
* **Collection Navigation** — Overrides the collection module's navigation, see [Customizing the Collection Navigation](#)
### Customizing the Module Navigation
@@ -39,18 +39,28 @@ The options in the [Module Bar](#) can be overridden with custom options per rol
If you are looking to replicate the default modules, paste the following configuration into the Module Navigation field using the [Raw Value](#) field label option.
```json
[
{
"icon": "box",
"name": "Collections",
"link": "/collections"
},
{
"icon": "people_alt",
"name": "User Directory",
"link": "/users"
},
{
"icon": "folder",
"name": "File Library",
"link": "/files"
},
{
"icon": "info",
"name": "Documentation",
"link": "/docs"
}
]
```
:::warning Settings Module
@@ -77,8 +87,64 @@ The collections in the [Navigation Bar](#) can be overridden with custom options
7. Choose a **Collection** from the dropdown
8. Use the drag handles to **drag-and-drop** the groups/collections into the desired order
## Configuring Role Permissions
Directus possesses an extremely granular, yet easy to configure, permissions system. When creating a new role, permissions are disabled for all project collections by default — allowing you to give explicit access to only what is required. Individual permissions are applied to the role, and each is scoped to a specific collection and CRUD action (create, read, update, delete).
:::warning Saves Automatically
Every change made to the permissions of a role is saved automatically and instantly.
:::
1. Navigate to **Settings > Roles & Permissions > [Role Name]**
2. Scroll to the **Permissions** section
3. **Click the icon** for the collection (row) and action (column) you want to set
4. Choose the desired permission level: **All Access**, **No Access**, or **Use Custom**
If you selected _All Access_ or _No Access_ then setup is complete. If you chose to customize the permissions, then a modal will open with additional configuration options. Continue with the appropriate guide below based on the _action_ of the permission.
#### Create (Custom)
5. **Field Permissions** control which fields accept a value on create. Fields are individually toggled.
6. **Field Validation** defines the rules for field values on create, as defined by the [Filter Rules](#) entered.
7. **Field Presets** control the field defaults when creating an item, as defined by the [Item Object](#) entered.
#### Read (Custom)
5. **Item Permissions** control which items can be read, as defined by the [Filter Rules](#) entered.
6. **Field Permissions** control which fields can be read. Fields are individually toggled.
#### Update (Custom)
5. **Item Permissions** control which items can be updated, as defined by the [Filter Rules](#) entered.
6. **Field Permissions** control which fields can be updated. Fields are individually toggled.
7. **Field Validation** defines the rules for field values on update, as defined by the [Filter Rules](#) entered.
8. **Field Presets** control the field defaults when updating an item, as defined by the [Item Object](#) entered.
#### Delete (Custom)
5. **Item Permissions** control which items can be deleted, as defined by the [Filter Rules](#) entered.
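For any of the custom actions above, an Item Permissions rule is a standard filter. For example, to scope an action to items owned by the current user (assuming a hypothetical `owner` field):

```json
{
	"owner": {
		"_eq": "$CURRENT_USER"
	}
}
```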
## Configuring System Permissions
In addition to setting permissions for your project's collections, you can also tailor the permissions for system collections. It is important to note that when [App Access](#) is enabled for a role, Directus will automatically add permission for the necessary system collections. To edit system permissions, simply click the "System Collections" toggle, and then edit permissions using the same steps as with project collections.
::: tip Resetting System Permissions
To reset the role's system permissions for proper App access, expand the system collections and then click "Reset System Permissions" at the bottom of the listing.
:::
## Deleting a Role
1. Navigate to **Settings > Roles & Permissions > [Role Name]**
2. Click the red **Delete Role** action button in the header
3. Confirm this decision by clicking **Delete** in the dialog
:::warning Users in a Deleted Role
If you delete a role that still has users in it, those users will be given a `NULL` role, which denies their App access and limits them to the [Public](#) permissions. They can then be reassigned to a new role by an admin.
:::
:::warning Last Admin
You must maintain at least one role/user with Admin Access so that you can still properly manage the project.
:::
## Creating a User

View File

@@ -1,249 +1,250 @@
# Environment Variables
> Each Directus project supports a number of environment variables for configuration. These variables are added to the `/api/.env` file, with an example file at `/api/example.env` for easier boilerplate setup.
## General
### `PORT`
What port to run the API under.<br>**Default: `8055`**
### `PUBLIC_URL`
URL where your API can be reached on the web.<br>**Default: `/`**
### `LOG_LEVEL`
What level of detail to log. One of `fatal`, `error`, `warn`, `info`, `debug`, `trace` or `silent`.<br>**Default: `info`**
### `LOG_STYLE`
Render the logs human readable (pretty) or as JSON. One of `pretty`, `raw`.<br>**Default: `pretty`**
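For example, a minimal `/api/.env` using these general variables (the values shown are the stated defaults):

```
PORT=8055
PUBLIC_URL=/
LOG_LEVEL=info
LOG_STYLE=pretty
```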
---
## Database
### `DB_CLIENT`
What database client to use. One of `pg`, `mysql`, `mysql2`, `oracledb`, `mssql`, or `sqlite3`. For all database clients except SQLite, you will also need to configure the following variables:
### `DB_HOST`
Database host. Required when using `pg`, `mysql`, `mysql2`, `oracledb`, or `mssql`.
### `DB_PORT`
Database port. Required when using `pg`, `mysql`, `mysql2`, `oracledb`, or `mssql`.
### `DB_DATABASE`
Database name. Required when using `pg`, `mysql`, `mysql2`, `oracledb`, or `mssql`.
### `DB_USER`
Database user. Required when using `pg`, `mysql`, `mysql2`, `oracledb`, or `mssql`.
### `DB_PASSWORD`
Database user's password. Required when using `pg`, `mysql`, `mysql2`, `oracledb`, or `mssql`.
### `DB_FILENAME` (SQLite Only)
Where to read/write the SQLite database. Required when using `sqlite3`.
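For example, a hypothetical PostgreSQL configuration (host, database name, and credentials are placeholders):

```
DB_CLIENT=pg
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE=directus
DB_USER=directus
DB_PASSWORD=secret
```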
::: tip Additional Database Variables
All `DB_*` environment variables are passed to the `connection` configuration of a [`Knex` instance](http://knexjs.org).
Based on your project's needs, you can extend the `DB_*` environment variables with any config you need to pass to the database instance.
:::
---
## Security
### `KEY`
Unique identifier for the project.
### `SECRET`
Secret string for the project. Generated on installation.
### `ACCESS_TOKEN_TTL`
The duration that the access token is valid.<br>**Default: `15m`**
### `REFRESH_TOKEN_TTL`
The duration that the refresh token is valid, and also how long users stay logged-in to the App.<br>**Default: `7d`**
### `REFRESH_TOKEN_COOKIE_SECURE`
Whether or not to use a secure cookie for the refresh token in cookie mode.<br>**Default: `false`**
### `REFRESH_TOKEN_COOKIE_SAME_SITE`
Value for `sameSite` in the refresh token cookie when in cookie mode.<br>**Default: `lax`**
---
## CORS
### `CORS_ENABLED`
Whether or not to enable the CORS headers.<br>**Default: `true`**
### `CORS_METHODS`
Value for the `Access-Control-Allow-Methods` header.<br>**Default: `GET,POST,PATCH,DELETE`**
### `CORS_ALLOWED_HEADERS`
Value for the `Access-Control-Allow-Headers` header.<br>**Default: `Content-Type,Authorization`**
### `CORS_EXPOSED_HEADERS`
Value for the `Access-Control-Expose-Headers` header.<br>**Default: `Content-Range`**
### `CORS_CREDENTIALS`
Whether or not to send the `Access-Control-Allow-Credentials` header.<br>**Default: `true`**
### `CORS_MAX_AGE`
Value for the `Access-Control-Max-Age` header.<br>**Default: `18000`**
---
## Rate Limiting
### `RATE_LIMITER_ENABLED`
Whether or not to enable rate limiting on the API.<br>**Default: `false`**
### `RATE_LIMITER_POINTS`
The amount of allowed hits per duration.<br>**Default: `50`**
### `RATE_LIMITER_DURATION`
The time window in seconds in which the points are counted.<br>**Default: `1`**
### `RATE_LIMITER_STORE`
Where to store the rate limiter counts. Either `memory`, `redis`, or `memcache`. Based on the rate limiter used, you must also provide the following configurations.<br>**Default: `memory`**
* **Memory**
* No additional configuration required
* **Redis**
* **`RATE_LIMITER_REDIS`** — Redis connection string
* eg: `redis://:authpassword@127.0.0.1:6380/4`
* Alternatively, you can enter individual connection parameters:
* **`RATE_LIMITER_REDIS_HOST`**
* **`RATE_LIMITER_REDIS_PORT`**
* **`RATE_LIMITER_REDIS_PASSWORD`**
* **`RATE_LIMITER_REDIS_DB`**
* **Memcache**
* **`RATE_LIMITER_MEMCACHE`** — Location of your memcache instance
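For example, a sketch of a Redis-backed rate limiter using the connection-string style (the URL is illustrative):

```
RATE_LIMITER_ENABLED=true
RATE_LIMITER_POINTS=50
RATE_LIMITER_DURATION=1
RATE_LIMITER_STORE=redis
RATE_LIMITER_REDIS=redis://:authpassword@127.0.0.1:6380/4
```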
::: tip Additional Rate Limiter Variables
All `RATE_LIMITER_*` variables are passed directly to a `rate-limiter-flexible` instance. Depending on your
project's needs, you can extend the above environment variables to configure any of [the `rate-limiter-flexible` options](https://github.com/animir/node-rate-limiter-flexible/wiki/Options).
:::
---
## Cache
### `CACHE_ENABLED`
Whether or not caching is enabled.<br>**Default: `false`**
### `CACHE_TTL`
How long the cache is persisted.<br>**Default: `30m`**
:::warning Forced Flush
Regardless of TTL, the cache is always flushed for every create, update, and delete action.
:::
### `CACHE_NAMESPACE`
How to scope the cache data.<br>**Default: `directus-cache`**
### `CACHE_STORE`
Where to store the cache data. Either `memory`, `redis`, or `memcache`. Based on the cache used, you must also provide the following configurations.<br>**Default: `memory`**
* **Memory**
* No additional configuration required
* **Redis**
* **`CACHE_REDIS`** — Redis connection string
* eg: `redis://:authpassword@127.0.0.1:6380/4`
* Alternatively, you can enter individual connection parameters:
* **`CACHE_REDIS_HOST`**
* **`CACHE_REDIS_PORT`**
* **`CACHE_REDIS_PASSWORD`**
* **`CACHE_REDIS_DB`**
* **Memcache**
* **`CACHE_MEMCACHE`** — Location of your memcache instance
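For example, a sketch of a Redis-backed cache (the connection string is illustrative):

```
CACHE_ENABLED=true
CACHE_TTL=30m
CACHE_STORE=redis
CACHE_REDIS=redis://:authpassword@127.0.0.1:6380/4
```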
---
## File Storage
### `STORAGE_LOCATIONS`
A CSV of storage locations (eg: `local,digitalocean,amazon`) to use. You can use any names you'd like for these keys, but each must have a matching `<LOCATION>` configuration.<br>**Default: `local`**
For each of the storage locations listed, you must provide the following configuration:
* **`STORAGE_<LOCATION>_PUBLIC_URL`** — Location on the internet where the files are accessible
* **`STORAGE_<LOCATION>_DRIVER`** — Which driver to use, either `local`, `s3`, or `gcl`
Based on your configured driver, you must also provide the following configurations.
* **Local**
* **`STORAGE_<LOCATION>_ROOT`** — Where to store the files on disk
* **S3**
* **`STORAGE_<LOCATION>_KEY`** — User key
* **`STORAGE_<LOCATION>_SECRET`** — User secret
* **`STORAGE_<LOCATION>_ENDPOINT`** — S3 Endpoint
* **`STORAGE_<LOCATION>_BUCKET`** — S3 Bucket
* **`STORAGE_<LOCATION>_REGION`** — S3 Region
* **Google Cloud**
* **`STORAGE_<LOCATION>_KEY_FILENAME`** — Path to key file on disk
* **`STORAGE_<LOCATION>_BUCKET`** — Google Cloud Storage bucket
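For example, a hypothetical setup with a local location plus an S3 location named `amazon` (URLs, keys, and bucket names are placeholders):

```
STORAGE_LOCATIONS="local,amazon"

STORAGE_LOCAL_PUBLIC_URL="https://example.com/uploads"
STORAGE_LOCAL_DRIVER="local"
STORAGE_LOCAL_ROOT="./uploads"

STORAGE_AMAZON_PUBLIC_URL="https://cdn.example.com"
STORAGE_AMAZON_DRIVER="s3"
STORAGE_AMAZON_KEY="your-key"
STORAGE_AMAZON_SECRET="your-secret"
STORAGE_AMAZON_ENDPOINT="s3.amazonaws.com"
STORAGE_AMAZON_BUCKET="your-bucket"
STORAGE_AMAZON_REGION="us-east-1"
```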
---
## oAuth
### `OAUTH_PROVIDERS`
CSV of oAuth providers you want to use. For each of the oAuth providers you list, you must also provide the following configurations.
* **`OAUTH_<PROVIDER>_KEY`** — oAuth key for the external service
* **`OAUTH_<PROVIDER>_SECRET`** — oAuth secret for the external service
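For example, enabling a single hypothetical `github` provider (the key and secret are placeholders):

```
OAUTH_PROVIDERS="github"
OAUTH_GITHUB_KEY="your-oauth-key"
OAUTH_GITHUB_SECRET="your-oauth-secret"
```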
---
## Extensions
### `EXTENSIONS_PATH`
Path to your local extensions folder.<br>**Default: `./extensions`**
---
## Email
### `EMAIL_FROM`
Email address from which emails are sent.<br>**Default: `no-reply@directus.io`**
### `EMAIL_TRANSPORT`
What to use to send emails. One of `sendmail`, `smtp`. Based on the transport used, you must also provide the following configurations.<br>**Default: `sendmail`**
* **Sendmail** (`sendmail`)
* **`EMAIL_SENDMAIL_NEW_LINE`** — What new line style to use in sendmail. **Default: `unix`**
* **`EMAIL_SENDMAIL_PATH`** — Path to your sendmail executable. **Default: `/usr/sbin/sendmail`**
* **SMTP** (`smtp`)
* **`EMAIL_SMTP_HOST`** — SMTP Host
* **`EMAIL_SMTP_PORT`** — SMTP Port
* **`EMAIL_SMTP_USER`** — SMTP User
* **`EMAIL_SMTP_PASSWORD`** — SMTP Password
* **`EMAIL_SMTP_POOL`** — Use SMTP pooling
* **`EMAIL_SMTP_SECURE`** — Enable TLS
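For example, a hypothetical SMTP configuration (host and credentials are placeholders):

```
EMAIL_FROM="no-reply@example.com"
EMAIL_TRANSPORT="smtp"
EMAIL_SMTP_HOST="smtp.example.com"
EMAIL_SMTP_PORT=465
EMAIL_SMTP_USER="smtp-user"
EMAIL_SMTP_PASSWORD="smtp-password"
EMAIL_SMTP_SECURE=true
```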

View File

@@ -1,16 +1,17 @@
# Error Codes
| Error Code | Description |
|-----------------------|------------------------------------------------|
| `FAILED_VALIDATION` | Validation for this particular item failed |
| `FORBIDDEN` | You are not allowed to do the current action |
| `INVALID_CREDENTIALS` | Username / password or access token is wrong |
| `INVALID_OTP` | Wrong OTP was provided |
| `INVALID_PAYLOAD` | Provided payload is invalid |
| `INVALID_QUERY` | The requested query parameters can not be used |
| `REQUESTS_EXCEEDED` | Hit rate limit; Too many requests |
| `ROUTE_NOT_FOUND` | Endpoint does not exist |
| `SERVICE_UNAVAILABLE` | Could not use external service |
:::warning Security
To prevent leaking which items exist, all actions for non-existing items will return a `FORBIDDEN` error.
:::

View File

@@ -1,81 +0,0 @@

View File

@@ -0,0 +1,94 @@
# Filter Rules
> Permissions, validation, and the API's `filter` parameter all rely on a specific JSON structure to define their rules. This page describes the syntax for creating flat, relational, or complex filter rules.
## Syntax
* **Field** — Any valid root field, [relational field](#), or [logical operator](#)
* **Operator** — Any valid [API operator](#) prefaced with an underscore
* **Value** — Any valid static value, or [dynamic variable](#)
```
{
<field>: {
<operator>: <value>
}
}
```
### Example
```json
{
"title": {
"_contains": "Directus"
}
}
```
## Relational
You can target related values by nesting field names. For example, if you have a relational [Many-to-One](#)
`author` field, you can set a rule for the `author.name` field using the following syntax.
```json
{
"author": {
"name": {
"_eq": "Rijk van Zanten"
}
}
}
```
## Logical Operators
You can nest or group multiple rules using the `_and` or `_or` logical operators. Each operator holds an array of rules, allowing for more complex filtering.
```json
{
"_or": [
{
"_and": [
{
"owner": {
"_eq": "$CURRENT_USER"
}
},
{
"status": {
"_in": [
"published",
"draft"
]
}
}
]
},
{
"_and": [
{
"owner": {
"_neq": "$CURRENT_USER"
}
},
{
"status": {
"_in": [
"published"
]
}
}
]
}
]
}
```
## Dynamic Variables
In addition to static values, you can also filter against _dynamic_ values using the following variables.
* `$CURRENT_USER` — The primary key of the currently authenticated user
* `$CURRENT_ROLE` — The primary key of the role for the currently authenticated user
* `$NOW` — The current timestamp
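For example, a rule matching only items whose `publish_date` (a hypothetical field) is in the past, assuming a less-than-or-equal `_lte` operator:

```json
{
	"publish_date": {
		"_lte": "$NOW"
	}
}
```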

View File

@@ -0,0 +1,43 @@
# Item Objects
> TK
## Syntax
* **Field** — Any valid root field or [relational field](#)
* **Value** — Any valid static value, or [dynamic variable](#)
```
{
<field>: <value>
}
```
### Example
```json
{
"title": "Directus"
}
```
## Relational
You can set related values by nesting field names. For example, if you have a relational [Many-to-One](#)
`author` field, you can set a value for the `author.name` field using the following syntax.
```json
{
"author": {
"name": "Rijk van Zanten"
}
}
```
## Dynamic Variables
In addition to static values, you can also set _dynamic_ values using the following variables.
* `$CURRENT_USER` — The primary key of the currently authenticated user
* `$CURRENT_ROLE` — The primary key of the role for the currently authenticated user
* `$NOW` — The current timestamp
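For example, a preset that defaults a hypothetical `owner` field to the authenticated user:

```json
{
	"owner": "$CURRENT_USER"
}
```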