Compare commits

...

42 Commits

Author SHA1 Message Date
FoxxMD
2a282a0d6f Merge branch 'edge' 2021-12-21 09:35:21 -05:00
FoxxMD
0d087521a7 feat(docs): Specify yaml can be validated in editor 2021-12-21 09:35:06 -05:00
FoxxMD
fb5fc961cc feat(docs): Add youtube credentials requirements for repost rule 2021-12-21 09:33:24 -05:00
FoxxMD
c04b305881 feat(docs): Add link-spamming regex config 2021-12-21 09:22:35 -05:00
FoxxMD
5c5e9a26aa refactor(ui): Unify editor look-and-feel
* Use named entry files for webpack so we can access them from config.ejs
* Move some of the validation logic to config.ejs so it can be used by json editor as well
* Use same controls for loading files/schemas/urls for both editors
* Add validation errors box for both editors
2021-12-20 23:30:41 -05:00
Matt Foxx
477d1a10ae Merge pull request #60 from rysie/edge
Support setting user flair by flair_template_id
2021-12-20 18:37:36 -05:00
Marcin Macinski
bbee92699c Support setting user flair by flair_template_id 2021-12-21 00:10:40 +01:00
FoxxMD
7f09043cdf fix(ui): Add missing yaml asset files
Need to unignore them...
2021-12-20 15:56:36 -05:00
FoxxMD
768a199c40 feat(ui): Add quick-n-dirty YAML editor
Using one-off project cm-yaml http://github.com/foxxmd/cm-yaml to run a static version of monaco-yaml inside CM
2021-12-20 15:42:37 -05:00
FoxxMD
6e4b0c7719 feat(regex): Add match sample to rule results so it can be used in content template 2021-12-20 12:33:40 -05:00
FoxxMD
89b21e6073 fix(repost): Use average of all algos to get high score
Ensures no anomalous scores dominate
2021-12-18 18:44:52 -05:00
FoxxMD
da611c5894 fix(repost): Fix max weighted score and formatting 2021-12-18 18:22:38 -05:00
FoxxMD
2c90a260c0 fix(stream): Remove processing clear behavior for polling to try to fix weird polling behavior 2021-12-18 18:08:00 -05:00
FoxxMD
f081598da6 feat(repost): Improve text matching and default values
* Use 3 different matching algorithms using the highest score out of the three
* Weight score based on length of the sentence
* Increase minimum number of words to 3
* Enforce min word count on external (youtube) comments
2021-12-18 17:42:43 -05:00
FoxxMD
55f45163a4 feat(repost): Fix some bugs and improve result data
* Wrong hash check
* Add closest match summary and percent for use in content
2021-12-18 14:54:01 -05:00
FoxxMD
e4dfa9dde3 refactor(docker): reduce image size with best practices
* use --no-cache so repository index isn't included in image
* use FROM instructions to build project with dev dependencies, then copy built project to new stage and install production-only libraries
* properly ignore docs and node_modules with dockerignore
2021-12-17 15:10:22 -05:00
FoxxMD
0e395792db refactor: Remove unused ts-auto-guard package
Has @ts-morph as a dependency which is a huge package
2021-12-17 14:36:03 -05:00
FoxxMD
dcbeb784e8 refactor: Remove set-random-interval
* depended on and always downloaded an entire, older typescript version (even with production install) -- was not necessary for one function
* refactor project to use newer TS version (specify any type for catch blocks to fix compiler errors)
2021-12-17 14:25:48 -05:00
FoxxMD
aeaeb6ce27 refactor(docker): Remove monaco editor local dependency to reduce image size
Wasn't necessary to have it installed locally (can use CDN) and is not a part of the core experience
2021-12-17 14:00:54 -05:00
FoxxMD
d6a29c5914 fix(bot): Actually get all moderated subreddits
Previously was only getting first page of moderated subreddits (listing). Make sure to get all pages.
2021-12-17 13:24:09 -05:00
FoxxMD
c1224121d4 feat(subreddit): Implement user profile filtering for SubredditState
* Add convenience property for filtering subreddits by user profile prefix (u_)
* Do some smart property comparison to check if SubredditState name is regex or not -- use additional user prof check if it is so we don't clobber existing name regex

Closes #56
2021-12-17 13:01:47 -05:00
FoxxMD
9790e681ea docs(regex): Add subreddit-ready example for removing discord link spam 2021-12-16 16:27:06 -05:00
FoxxMD
a48a850c98 Update winston repository package format/information 2021-12-16 14:58:43 -05:00
FoxxMD
b8369a9e9f Package bump for security
https://github.com/advisories/GHSA-93q8-gq69-wqmw
2021-12-14 12:06:48 -05:00
FoxxMD
0c31bdf25e refactor(repost): Improve repost criteria configuration and add documentation
* Simplify usage b/w comment-submission by removing "criteria" in main criteria property -- comment checks can just use additional properties
* Consolidated occurrence count/time into one property to allow and/or operands on both conditions (more powerful!)
* Added documentation describing repost configuration
* Added repost configuration examples
2021-12-07 16:30:05 -05:00
FoxxMD
4b14e581dd fix: Add more staggering on heartbeat check
Always stagger subreddit checks
2021-12-06 17:31:00 -05:00
FoxxMD
b2846efd2b fix: Improve falloff behavior when reddit api errors are encountered
* Turn off snoowrap request queuing
* Aggregate errors from all managers at bot-level and force all to stop (and clear queue/polling) if a small threshold is met
* Add activity refresh on check into try-catch so delayed activities in queue don't cause a loop if they fail due to api issues
2021-12-06 17:16:04 -05:00
FoxxMD
a787e4515b feat(repost): Implement occurrence count and time comparisons
* User-defined set of comparisons for testing how many reposts were found
* User-defined set of comparisons for testing when a slice/all of reposts were created
2021-12-06 13:28:02 -05:00
FoxxMD
f63e2a0ec4 Merge branch 'topComments' into edge 2021-12-01 17:34:51 -05:00
FoxxMD
9d0e098db1 feat(rule): Implement repost rule
* checks for both submission/comment
* search by submission title, url, dups/crossposts, and external (just youtube for now)
* user-defined string sameness for all search facets
* user-defined case-sensitivity and regex-based transformations for activity/repost item values
* cache comment checks
* implemented youtube client for retrieving video comments
2021-12-01 17:34:25 -05:00
FoxxMD
181390f0eb fix: Correct error description for reading a config file
Was using copy-pasted statement from wiki error which was confusing
2021-12-01 09:33:03 -05:00
FoxxMD
a8c7b1dac9 Interim implementation of repost rule 2021-11-30 20:42:40 -05:00
FoxxMD
fd5a92758d Merge branch 'edge' 2021-11-28 19:43:20 -05:00
FoxxMD
027199d788 docs(heroku): Add information on dyno usage 2021-11-15 17:27:42 -05:00
FoxxMD
2a9f01b928 fix(heroku): Add OPERATOR to heroku deploy template 2021-11-15 17:26:34 -05:00
FoxxMD
cf54502f0d fix(heroku): Add redirect uri env to heroku deploy template 2021-11-15 17:26:34 -05:00
FoxxMD
2a3663ccc9 refactor(heroku): heroku-specific docker change
* Add image for web and worker
* Refactor port usage to conform to heroku's usage
2021-11-15 17:26:34 -05:00
FoxxMD
dc2eeffcb5 fix(bot): Remove own profile "sub" from default moderated subreddits to run on
The user profile for the bot account is returning as a moderated "subreddit" EX (/u/ContextMotBot) but it's not usable by CM because it does not have a wiki and users don't use this as a sub. Filter it out of moderated subreddits when using the default behavior.
2021-11-15 13:19:21 -05:00
FoxxMD
39daa11f2d Merge branch 'edge' 2021-11-15 12:53:28 -05:00
FoxxMD
93de38a845 fix(filter): Missing 'op' property test on itemIs for comment 2021-11-15 12:47:34 -05:00
FoxxMD
43caaca1f2 fix(config): Fix wrong provider data used when bot has no caching config
Use previously built defaultProvider, instead of hardcoded memory config, so any cache config from operator waterfalls to bot/subreddit

Closes #46
2021-11-02 15:55:03 -04:00
FoxxMD
7bcc0195fe fix(author): change include/exclude on RULE to be optional
One of these properties must be present for the rule to be valid. This is enforced at runtime. Change the schema so that both are optional from the config-side though.
2021-11-02 09:41:55 -04:00
148 changed files with 6304 additions and 2562 deletions


@@ -1,8 +1,8 @@
-node_modules
Dockerfile
.dockerignore
.gitignore
.git
src/logs
-/docs
.github
+/docs/
+/node_modules/

.gitignore

@@ -381,4 +381,5 @@ dist
.pnp.*
**/src/**/*.js
+!src/Web/assets/public/yaml/*
**/src/**/*.map


@@ -1,15 +1,17 @@
-FROM node:16-alpine3.14
+FROM node:16-alpine3.14 as base
ENV TZ=Etc/GMT
# vips required to run sharp library for image comparison
RUN echo "http://dl-4.alpinelinux.org/alpine/v3.14/community" >> /etc/apk/repositories \
-&& apk --update add vips
+&& apk --no-cache add vips
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
WORKDIR /usr/app
+FROM base as build
COPY package*.json ./
COPY tsconfig.json .
@@ -17,7 +19,13 @@ RUN npm install
ADD . /usr/app
-RUN npm run build
+RUN npm run build && rm -rf node_modules
+FROM base as app
+COPY --from=build /usr/app /usr/app
+RUN npm install --production
ENV NPM_CONFIG_LOGLEVEL debug


@@ -27,6 +27,7 @@ Some feature highlights:
* Activity state (removed, locked, distinguished, etc.)
* Rules and Actions support named references (write once, reference anywhere)
* [**Image Comparisons**](/docs/imageComparison.md) via fingerprinting and/or pixel differences
* [**Repost detection**](/docs/examples/repost) with support for external services (youtube, etc...)
* Global/subreddit-level **API caching**
* Support for [Toolbox User Notes](https://www.reddit.com/r/toolbox/wiki/docs/usernotes) as criteria or Actions (writing notes)
* Docker container support
@@ -125,7 +126,7 @@ Moderator view/invite and authorization:
A built-in editor using [monaco-editor](https://microsoft.github.io/monaco-editor/) makes editing configurations easy:
-* Automatic JSON syntax validation and formatting
+* Automatic JSON or YAML syntax validation and formatting
* Automatic Schema (subreddit or operator) validation
* All properties are annotated via hover popups
* Unauthenticated view via `yourdomain.com/config`


@@ -17,12 +17,22 @@
"REFRESH_TOKEN": {
"description": "Refresh token retrieved from authenticating an account with your Reddit Application",
"value": "",
-"required": true
+"required": false
},
"ACCESS_TOKEN": {
"description": "Access token retrieved from authenticating an account with your Reddit Application",
"value": "",
-"required": true
+"required": false
},
+"REDIRECT_URI": {
+"description": "Redirect URI you specified when creating your Reddit Application. Required if you want to use the web interface. In the provided example replace 'your-heroku-app-name' with the name of your HEROKU app.",
+"value": "https://your-heroku-app-name.herokuapp.com/callback",
+"required": false
+},
+"OPERATOR": {
+"description": "Your reddit username WITHOUT any prefixes EXAMPLE /u/FoxxMD => FoxxMD. Specified user will be recognized as an admin.",
+"value": "",
+"required": false
+},
"WIKI_CONFIG": {
"description": "Relative url to contextbot wiki page EX https://reddit.com/r/subreddit/wiki/<path>",


@@ -102,6 +102,7 @@ Find detailed descriptions of all the Rules, with examples, below:
* [History](/docs/examples/history)
* [Author](/docs/examples/author)
* [Regex](/docs/examples/regex)
* [Repost](/docs/examples/repost)
### Rule Set


@@ -17,6 +17,7 @@ This directory contains example of valid, ready-to-go configurations for Context
* [History](/docs/examples/history)
* [Author](/docs/examples/author)
* [Regex](/docs/examples/regex)
* [Repost](/docs/examples/repost)
* [Toolbox User Notes](/docs/examples/userNotes)
* [Advanced Concepts](/docs/examples/advancedConcepts)
* [Rule Sets](/docs/examples/advancedConcepts/ruleSets.json5)


@@ -18,3 +18,5 @@ Which can then be used in conjunction with a [`window`](https://github.com/FoxxM
* [Trigger if regex matches at least 3 of Author's last 10 activities](/docs/examples/regex/matchActivityThresholdHistory.json5)
* [Trigger if there are 5 regex matches in the Author's last 10 activities](/docs/examples/regex/matchTotalHistoryActivity.json5)
* [Trigger if there are 5 regex matches in the Author's last 10 comments](/docs/examples/regex/matchSubsetHistoryActivity.json5)
* [Remove comments that are spamming discord links](/docs/examples/regex/removeDiscordSpam.json5)
* Differs from just using automod because this config can allow one-off/organic links from users who DO NOT spam discord links but will still remove the comment if the user is spamming them


@@ -0,0 +1,73 @@
{
"checks": [
{
"name": "remove discord spam",
"notifyOnTrigger": true,
"description": "remove comments from users who are spamming discord links",
"kind": "comment",
"authorIs": {
"exclude": [
{
"isMod": true
}
]
},
"itemIs": [
{
"removed": false,
"approved": false,
}
],
"condition": "OR",
"rules": [
{
// set to false if you want to allow comments with a discord link ONLY IF
// the author doesn't have a history of spamming discord links
// -- basically allows one-off/organic discord links
"enable": true,
"name": "linkOnlySpam",
"kind": "regex",
"criteria": [
{
"name": "only link",
"regex": "/^.*(discord\\.gg\\/[\\w\\d]+)$/i",
}
]
},
{
"condition": "AND",
"rules": [
{
"name": "linkAnywhereSpam",
"kind": "regex",
"criteria": [
{
"name": "contains link anywhere",
"regex": "/^.*(discord\\.gg\\/[\\w\\d]+).*$/i",
}
]
},
{
"name": "linkAnywhereHistoricalSpam",
"kind": "regex",
"criteria": [
{
"name": "contains links anywhere historically",
"regex": "/^.*(discord\\.gg\\/[\\w\\d]+).*$/i",
"totalMatchThreshold": ">= 3",
"lookAt": "comments",
"window": 10
}
]
}
]
}
],
"actions": [
{
"kind": "remove"
}
]
}
]
}


@@ -0,0 +1,703 @@
The **Repost** rule is used to find reposts for both **Submissions** and **Comments**, depending on what type of **Check** it is used on.
Note: This rule is for searching **all of Reddit** for reposts, as opposed to just the Author of the Activity being checked. If you only want to check for reposts by the Author of the Activity being checked you should use the [Repeat Activity](/docs/examples/repeatActivity) rule.
# TLDR
Out of the box CM generates a repost rule with sensible default behavior without any configuration. You do not need to configure any of the below options (facets, modifiers, criteria) yourself in order to have a working repost rule. Default behavior is as follows...
* When looking for Submission reposts CM will find any Submissions with
* a very similar title
* or independent of title...
* any crossposts/duplicates
* any submissions with the exact URL
* When looking for Comment reposts CM will do the above AND THEN
* compare the top 50 most-upvoted comments from the top 10 most-upvoted Submissions against the comment being checked
* compare any items found from external source (Youtube comments, etc...) against the comment being checked
# Configuration
## Search Facets
ContextMod has several ways to search for reposts -- all of which look at different elements of a Submission in order to find repost candidates. You can define any/all of these **Search Facets** you want to use to search Reddit inside the configuration for the Repost Rule in the `searchOn` property.
### Usage
Facets are specified in the `searchOn` array property within the rule's configuration.
**String**
Specify one or more types of facets as a string to use their default configurations
<details>
```json5
{
"kind": "repost",
"criteria": [
{
// ...
"searchOn": ["title", "url", "crossposts"],
// ....
}
]
}
```
</details>
**Object**
**string** and object configurations can be mixed
<details>
```json5
{
"kind": "repost",
"criteria": [
{
// ...
"searchOn": [
"title",
{
"kind": "url",
// could also specify multiple types to use the same config for all
//"kind": ["url", "duplicates"]
"matchScore": 90,
//...
},
"external"
],
// ....
}
]
}
```
</details>
### Facet Types
* **title** -- search reddit for Submissions with a similar title
* **url** -- search reddit for Submissions with the same URL
* **duplicates** -- get all Submissions **reddit has identified** as duplicates that are **NOT** crossposts
* these are found under *View discussions in other communities* (new reddit) or *other discussions* (old reddit) on the Submission
* **crossposts** -- get all Submissions where the current Submission is the source of an **official** crosspost
* this differs from duplicates in that crossposts use reddit's built-in crosspost functionality, respect subreddit crosspost rules, and link back to the original Submission
* **external** -- get items from the Submission's link source that may be reposted (currently implemented for **Comment Checks** only)
* When the Submission link is for...
* **Youtube** -- get top comments on video by replies/like count
* **NOTE:** An **API Key** for the [Youtube Data API](https://developers.google.com/youtube/v3) must be provided for this facet to work. This can be provided by the operator alongside [bot credentials](/docs/operatorConfiguration.md) or in the top-level `credentials` property for a [subreddit configuration.](https://json-schema.app/view/%23?url=https%3A%2F%2Fraw.githubusercontent.com%2FFoxxMD%2Freddit-context-bot%2Fmaster%2Fsrc%2FSchema%2FApp.json)
### Facet Modifiers
For all **Facets**, except for **external**, there are options that can be configured to determine if a found Submission is a "valid" repost IE filtering. These options can be configured **per facet**.
* **matchScore** -- The percentage, as a whole number, of a repost title that must match the title being checked in order to consider both a match
* **minWordCount** -- The minimum number of words a title must have
* **caseSensitive** -- If the match comparison should be case-sensitive (defaults to `false`)
Additionally, the current Activity's title and/or each repost's title can be transformed before matching:
* **transformations** -- An array of SearchAndReplace objects used to transform the repost's title
* **transformationsActivity** -- An array of SearchAndReplace objects used to transform the current Activity's title
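The two transformation properties above are, conceptually, ordered regex search-and-replace passes applied to a title before comparison. A minimal sketch of that idea (the `SearchAndReplace` shape and function name here are illustrative assumptions, not CM's actual internals):

```typescript
// Hypothetical shape of a SearchAndReplace entry -- the real schema may differ.
interface SearchAndReplace {
  search: string;  // regex pattern, as a string
  replace: string; // replacement text
}

// Apply each transformation, in order, to a title before comparison.
function applyTransformations(text: string, transformations: SearchAndReplace[]): string {
  return transformations.reduce(
    (acc, t) => acc.replace(new RegExp(t.search, "gi"), t.replace),
    text,
  );
}

// Example: strip an "[OC]" tag and collapse whitespace before titles are compared
const cleaned = applyTransformations("[OC] My   cool video", [
  { search: "\\[OC\\]", replace: "" },
  { search: "\\s+", replace: " " },
]).trim();
// cleaned === "My cool video"
```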
#### Modifier Defaults
To make facets easier to use without configuration, sensible defaults are applied to each when no other configuration is defined...
* **title**
* `matchScore: 85` -- The candidate repost's title must be at least 85% similar to the current Activity's title
* `minWordCount: 2` -- The candidate repost's title must have at least 2 words
For `url`,`duplicates`, and `crossposts` the only default is `matchScore: 0` because the assumption is you want to treat any actual dups/x-posts or exact URLs as reposts, regardless of their title.
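To illustrate what a `matchScore` threshold is doing: the changelog above notes CM combines several string-matching algorithms, but a single Sørensen–Dice bigram score is enough to show the idea. This function is an illustrative sketch, not CM's actual scorer:

```typescript
// Sorensen-Dice bigram similarity as a whole-number percent (0-100).
function diceSimilarity(a: string, b: string): number {
  const bigrams = (s: string): Map<string, number> => {
    const m = new Map<string, number>();
    const t = s.toLowerCase();
    for (let i = 0; i < t.length - 1; i++) {
      const bg = t.slice(i, i + 2);
      m.set(bg, (m.get(bg) ?? 0) + 1);
    }
    return m;
  };
  const aBi = bigrams(a);
  const bBi = bigrams(b);
  let overlap = 0;
  let aTotal = 0;
  let bTotal = 0;
  aBi.forEach((n, bg) => {
    aTotal += n;
    overlap += Math.min(n, bBi.get(bg) ?? 0);
  });
  bBi.forEach((n) => { bTotal += n; });
  if (aTotal + bTotal === 0) return a.toLowerCase() === b.toLowerCase() ? 100 : 0;
  return Math.round(((2 * overlap) / (aTotal + bTotal)) * 100);
}

// With matchScore: 85, a candidate title counts as a repost only if its
// similarity to the checked title is at least 85.
const isRepost = diceSimilarity("My cool video", "my cool video!") >= 85; // true
```

Under this picture, the `matchScore: 0` default for `url`/`duplicates`/`crossposts` means every candidate passes the title filter regardless of similarity.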
## Additional Criteria Properties
A **criteria** object may also specify some additional tests to run against the reposts found from searching.
### For Submissions and Comments
#### Occurrences
Define a set of criteria to test against the **number of reposts**, **time reposts were created**, or both.
##### Count
<details>
```json5
{
"kind": "repost",
"criteria": [
{
// ...
"searchOn": ["title", "url", "crossposts"],
"occurrences": {
"criteria": [
{
// passes if BOTH tests are true
"count": {
"condition": "AND", // default is AND
"test": [
"> 3", // TRUE if there are GREATER THAN 3 reposts found
"<= 5" // TRUE if there are LESS THAN OR EQUAL TO 5 reposts found
]
}
}
],
}
}
]
}
```
</details>
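Each string in `test` is just a comparison operator plus a number, and `condition` controls whether every test (AND) or any test (OR) must pass. A hypothetical evaluator sketching this (CM's real parser supports richer syntax):

```typescript
// Parse a comparison like "> 3" or "<= 5" and test it against a repost count.
function testCount(comparison: string, count: number): boolean {
  const match = comparison.trim().match(/^(>=|<=|>|<)\s*(\d+)$/);
  if (match === null) throw new Error(`could not parse comparison: ${comparison}`);
  const [, op, valueStr] = match;
  const value = Number(valueStr);
  switch (op) {
    case ">": return count > value;
    case ">=": return count >= value;
    case "<": return count < value;
    default: return count <= value; // "<="
  }
}

// condition: "AND" -- every comparison must be true
const tests = ["> 3", "<= 5"];
const repostCount = 4;
const passed = tests.every((t) => testCount(t, repostCount)); // true: 4 > 3 and 4 <= 5
```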
##### Time
Define a test or array of tests to run against **when reposts were created**
<details>
```json5
{
"kind": "repost",
"criteria": [
{
// ...
"searchOn": [
"title",
"url",
"crossposts"
],
"occurrences": {
"criteria": [
{
"time": {
// how to test array of comparisons. AND => all must pass, OR => any must pass
"condition": "AND",
"test": [
{
// which of the found reposts to test the time comparison on
//
// "all" => ALL reposts must pass time comparison
// "any" => ANY repost must pass time comparison
// "newest" => The newest (closest in time to now) repost must pass time comparison
// "oldest" => The oldest (furthest in time from now) repost must pass time comparison
//
"testOn": "all",
// Tested items must be OLDER THAN 3 months
"condition": "> 3 months"
}
]
}
}
]
},
}
]
}
```
</details>
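The `testOn` selector decides which of the found reposts must satisfy the time comparison. A sketch of that selection logic using ages in days (real configurations use duration strings like `> 3 months`; the names here are illustrative, not CM's internals):

```typescript
type TestOn = "all" | "any" | "newest" | "oldest";

// Test repost ages (days since creation) against an "older than" threshold.
function testAges(ages: number[], testOn: TestOn, olderThanDays: number): boolean {
  const passes = (age: number) => age > olderThanDays;
  switch (testOn) {
    case "all": return ages.every(passes);
    case "any": return ages.some(passes);
    case "newest": return passes(Math.min(...ages)); // newest repost = smallest age
    default: return passes(Math.max(...ages));       // "oldest" = largest age
  }
}

// Reposts created 10, 120 and 400 days ago, tested as "older than 90 days":
const ages = [10, 120, 400];
const anyPass = testAges(ages, "any", 90); // true: two reposts are older than 90 days
const allPass = testAges(ages, "all", 90); // false: one repost is only 10 days old
```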
### For Comments
When the rule is run in a **Comment Check** you may specify text comparisons (like those found in Search Facets) to run on the contents of the repost comments *against* the contents of the comment being checked.
* **matchScore** -- The percentage, as a whole number, of a repost comment that must match the comment being checked in order to consider both a match (defaults to 85% IE `85`)
* **minWordCount** -- The minimum number of words a comment must have
* **caseSensitive** -- If the match comparison should be case-sensitive (defaults to `false`)
# Examples
Examples of a *full* CM configuration, including the Repost Rule, in various scenarios. In each scenario the parts of the configuration that affect the rule are indicated.
## Submissions
When the Repost Rule is run on a **Submission Check** IE the activity being checked is a Submission.
### Default Behavior (No configuration)
This is the same behavior described in the [TLDR](#TLDR) section above -- find any submissions with:
* a very similar title (85% or more the same)
* or ignoring title...
* any crossposts/duplicates
* any submissions with the exact URL
<details>
```json5
{
"polling": [
"unmoderated"
],
"checks": [
{
"name": "subRepost",
"description": "Check if submission has been reposted",
// kind specifies this check is for SUBMISSIONS
"kind": "submission",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost"
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This submission was reposted"
}
]
}
]
}
```
</details>
### Search by Title Only
Find any submissions with:
* a very similar title (85% or more the same)
<details>
```json5
{
"polling": [
"unmoderated"
],
"checks": [
{
"name": "subRepost",
"description": "Check if submission has been reposted",
// kind specifies this check is for SUBMISSIONS
"kind": "submission",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
// specify only title to search on
"searchOn": [
"title" // uses default configuration since only string is specified
]
}
]
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This submission was reposted"
}
]
}
]
}
```
</details>
### Search by Title only and specify similarity percentage
* a very similar title (95% or more the same)
<details>
```json5
{
"polling": [
"unmoderated"
],
"checks": [
{
"name": "subRepost",
"description": "Check if submission has been reposted",
// kind specifies this check is for SUBMISSIONS
"kind": "submission",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
// specify only title to search on
"searchOn": [
{
"kind": "title",
// titles must be 95% or more similar
"matchScore": 95
}
]
}
]
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This submission was reposted"
}
]
}
]
}
```
</details>
### Search by Title, specify similarity percentage, AND any duplicates
<details>
```json5
{
"polling": [
"unmoderated"
],
"checks": [
{
"name": "subRepost",
"description": "Check if submission has been reposted",
// kind specifies this check is for SUBMISSIONS
"kind": "submission",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
"searchOn": [
// look for duplicates (NON crossposts) using default configuration
"duplicates",
// search by title
{
"kind": "title",
// titles must be 95% or more similar
"matchScore": 95
}
]
}
]
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This submission was reposted"
}
]
}
]
}
```
</details>
### Approve Submission if not reposted in the last month, by title
<details>
```json5
{
"polling": [
"unmoderated"
],
"checks": [
{
"name": "subRepost",
"description": "Check there are no reposts with same title in the last month",
// kind specifies this check is for SUBMISSIONS
"kind": "submission",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
"searchOn": [
"title"
],
"occurrences": {
// if EITHER criteria is TRUE then it "passes"
"condition": "OR",
"criteria": [
// first criteria:
// TRUE if there are LESS THAN 1 reposts (no reposts found)
{
"count": {
"test": ["< 1"]
}
},
// second criteria:
// TRUE if the newest repost is older than one month
{
"time": {
"test": [
{
"testOn": "newest",
"condition": "> 1 month"
}
]
}
}
]
},
}
]
},
//
// repost rule configuration is above
],
"actions": [
{
// approve this post since we know it is not a repost of anything within the last month
"kind": "approve",
}
]
}
]
}
```
</details>
## Comments
### Default Behavior (No configuration)
This is the same behavior described in the [TLDR](#TLDR) section above -- find any submissions with:
* a very similar title (85% or more the same)
* or ignoring title...
* any crossposts/duplicates
* any submissions with the exact URL
* If the comment being checked is on a Submission linking to a Youtube video then get the top 50 comments on that video as well...
AND THEN
* sort submissions by votes
* take top 20 (upvoted) comments from top 10 (upvoted) submissions
* sort comments by votes, take top 50 + top 50 external items
FINALLY
* filter all gathered comments by default `matchScore: 85` to find very similar matches
* rule is triggered if any are found
<details>
```json5
{
"polling": [
"newComm"
],
"checks": [
{
"name": "commRepost",
"description": "Check if comment has been reposted",
// kind specifies this check is for COMMENTS
"kind": "comment",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost"
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This comment was reposted"
}
]
}
]
}
```
</details>
### Search by external (youtube) comments only
<details>
```json5
{
"polling": [
"newComm"
],
"checks": [
{
"name": "commRepost",
"description": "Check if comment has been reposted from youtube",
// kind specifies this check is for COMMENTS
"kind": "comment",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
// specify only external (youtube) to search on
"searchOn": [
"external"
]
}
]
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This comment was reposted from youtube"
}
]
}
]
}
```
</details>
### Search by external (youtube) comments only, with higher comment match percentage
<details>
```json5
{
"polling": [
"newComm"
],
"checks": [
{
"name": "commRepost",
"description": "Check if comment has been reposted from youtube",
// kind specifies this check is for SUBMISSIONS
"kind": "comment",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
// specify only external (youtube) to search on
"searchOn": [
"external"
],
"matchScore": 95 // matchScore for comments is on criteria instead of searchOn config...
},
]
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This comment was reposted from youtube"
}
]
}
]
}
```
</details>
### Search by external (youtube) comments and submission URL, with higher comment match percentage
<details>
```json5
{
"polling": [
"newComm"
],
"checks": [
{
"name": "commRepost",
"description": "Check if comment has been reposted",
// kind specifies this check is for COMMENTS
"kind": "comment",
"condition": "AND",
"rules": [
// repost rule configuration is below
//
{
"kind": "repost",
"criteria": [
{
// specify only external (youtube) to search on
"searchOn": [
"external",
// can specify any/all submission search facets to acquire comments from
"url"
],
"matchScore": 95 // matchScore for comments is on criteria instead of searchOn config...
},
]
},
//
// repost rule configuration is above
],
"actions": [
{
"kind": "report",
"content": "This comment was reposted from youtube or from submission with the same URL"
}
]
}
]
}
```
</details>


@@ -39,3 +39,26 @@ then remove the submission
### [Remove comment if the user has posted the same comment 4 or more times in a row](/docs/examples/subredditReady/commentSpam.json5)
If the user made the same comment (with some fuzzy matching) 4 or more times in a row in the past (50 activities or 6 months) then remove the comment.
### [Remove comment if it is discord invite link spam](/docs/examples/subredditReady/discordSpam.json5)
This rule goes a step further than automod can by being more discretionary about how it handles this type of spam.
* Remove the comment and **ban a user** if:
* Comment being checked contains **only** a discord link (no other text) AND
* Discord links appear **anywhere** in three or more of the last 10 comments the Author has made
otherwise...
* Remove the comment if:
* Comment being checked contains **only** a discord link (no other text) OR
* Comment contains a discord link **anywhere** AND
* Discord links appear **anywhere** in three or more of the last 10 comments the Author has made
Using these checks ContextMod can more easily distinguish between these use cases for a user commenting with a discord link:
* actual spammers who only spam a discord link
* users who may comment with a link but have context for it either in the current comment or in their history
* users who may comment with a link but it's a one-off event (no other links historically)
Additionally, you could modify both/either of these checks to not remove one-off discord link comments but still remove if the user has a historical trend for spamming links
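The `totalMatchThreshold` used in these configs can be pictured as counting regex hits across the author's recent comment window and comparing the count to the threshold. A simplified sketch (the function name and window handling are illustrative, not CM's internals):

```typescript
const discordLink = /discord\.gg\/[\w\d]+/i;

// Count how many of the author's recent comments contain a discord link and
// compare against the ">= 3" threshold from the config.
function historicalSpam(recentComments: string[], threshold: number): boolean {
  const matches = recentComments.filter((c) => discordLink.test(c)).length;
  return matches >= threshold;
}

const lastTen = [
  "join my server discord.gg/abc123",
  "nice post",
  "discord.gg/abc123",
  "lol",
  "come hang out discord.gg/abc123",
];
const spamming = historicalSpam(lastTen, 3); // true: three comments contain a link
```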


@@ -0,0 +1,75 @@
{
"polling": ["newComm"],
"checks": [
{
"name": "ban discord only spammer",
"description": "ban a user who spams only a discord link many times historically",
"kind": "comment",
"condition": "AND",
"rules": [
"linkOnlySpam",
"linkAnywhereHistoricalSpam",
],
"actions": [
{
"kind": "remove"
},
{
"kind": "ban",
"content": "spamming discord links"
}
]
},
{
"name": "remove discord spam",
"description": "remove comments from users who only link to discord or mention discord link many times historically",
"kind": "comment",
"condition": "OR",
"rules": [
{
"name": "linkOnlySpam",
"kind": "regex",
"criteria": [
{
"name": "only link",
"regex": "/^.*(discord\\.gg\\/[\\w\\d]+)$/i",
}
]
},
{
"condition": "AND",
"rules": [
{
"name": "linkAnywhereSpam",
"kind": "regex",
"criteria": [
{
"name": "contains link anywhere",
"regex": "/^.*(discord\\.gg\\/[\\w\\d]+).*$/i",
}
]
},
{
"name": "linkAnywhereHistoricalSpam",
"kind": "regex",
"criteria": [
{
"name": "contains links anywhere historically",
"regex": "/^.*(discord\\.gg\\/[\\w\\d]+).*$/i",
"totalMatchThreshold": ">= 3",
"lookAt": "comments",
"window": 10
}
]
}
]
}
],
"actions": [
{
"kind": "remove"
}
]
}
]
}


@@ -50,6 +50,18 @@ tsc -p .
### [Heroku Quick Deploy](https://heroku.com/about)
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://dashboard.heroku.com/new?template=https://github.com/FoxxMD/context-mod)
This template provides a **web** and **worker** dyno for heroku.
* **Web** -- Will run the bot **and** the web interface for ContextMod.
* **Worker** -- Will run **just** the bot.
Be aware that Heroku's [free dyno plan](https://devcenter.heroku.com/articles/free-dyno-hours#dyno-sleeping) enacts some limits:
* A **Web** dyno will go to sleep (pause) after 30 minutes without web activity -- so your bot will ALSO go to sleep at this time
* The **Worker** dyno **will not** go to sleep but you will NOT be able to access the web interface. You can, however, still see how CM is running by reading the logs for the dyno.
If you want to use a free dyno it is recommended you perform first-time setup (bot authentication and configuration, testing, etc...) with the **Web** dyno, then SWITCH to a **Worker** dyno so it can run 24/7.
# Bot Authentication
Next you need to create a bot and authenticate it with Reddit. Follow the [bot authentication guide](/docs/botAuthentication.md) to complete this step.

heroku.Dockerfile Normal file

@@ -0,0 +1,29 @@
FROM node:16-alpine3.14
ENV TZ=Etc/GMT
# vips required to run sharp library for image comparison
RUN echo "http://dl-4.alpinelinux.org/alpine/v3.14/community" >> /etc/apk/repositories \
&& apk --update add vips
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
WORKDIR /usr/app
COPY package*.json ./
COPY tsconfig.json .
RUN npm install
ADD . /usr/app
RUN npm run build
ENV NPM_CONFIG_LOGLEVEL debug
ARG log_dir=/home/node/logs
RUN mkdir -p $log_dir
VOLUME $log_dir
ENV LOG_DIR=$log_dir
CMD [ "node", "src/index.js", "run", "all", "--port $PORT"]


@@ -1,3 +1,4 @@
build:
docker:
worker: Dockerfile
web: heroku.Dockerfile
worker: heroku.Dockerfile
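With this change both Heroku process types build from the same image; assuming no other sections, the resulting four-line heroku.yml reads in full:

```yaml
build:
  docker:
    web: heroku.Dockerfile
    worker: heroku.Dockerfile
```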

package-lock.json generated

File diff suppressed because it is too large


@@ -7,7 +7,6 @@
"test": "echo \"Error: no tests installed\" && exit 1",
"build": "tsc",
"start": "node src/index.js run",
"guard": "ts-auto-guard src/JsonConfig.ts",
"schema": "npm run -s schema-app & npm run -s schema-ruleset & npm run -s schema-rule & npm run -s schema-action & npm run -s schema-config",
"schema-app": "typescript-json-schema tsconfig.json JSONConfig --out src/Schema/App.json --required --tsNodeRegister --refs",
"schema-ruleset": "typescript-json-schema tsconfig.json RuleSetJson --out src/Schema/RuleSet.json --required --tsNodeRegister --refs",
@@ -26,8 +25,10 @@
"license": "ISC",
"dependencies": {
"@awaitjs/express": "^0.8.0",
"@googleapis/youtube": "^2.0.0",
"@stdlib/regexp-regexp": "^0.0.6",
"ajv": "^7.2.4",
"ansi-regex": ">=5.0.1",
"async": "^3.2.0",
"autolinker": "^3.14.3",
"body-parser": "^1.19.0",
@@ -57,7 +58,6 @@
"leven": "^3.1.0",
"lodash": "^4.17.21",
"lru-cache": "^6.0.0",
"monaco-editor": "^0.27.0",
"mustache": "^4.2.0",
"node-fetch": "^2.6.1",
"normalize-url": "^6.1.0",
@@ -70,15 +70,15 @@
"pixelmatch": "^5.2.1",
"pretty-print-json": "^1.0.3",
"safe-stable-stringify": "^1.1.1",
"set-random-interval": "^1.1.0",
"snoostorm": "^1.5.2",
"snoowrap": "^1.23.0",
"socket.io": "^4.1.3",
"string-similarity": "^4.0.4",
"tcp-port-used": "^1.0.2",
"triple-beam": "^1.3.0",
"typescript": "^4.3.4",
"webhook-discord": "^3.7.7",
"winston": "FoxxMD/winston#fbab8de969ecee578981c77846156c7f43b5f01e",
"winston": "github:FoxxMD/winston#fbab8de969ecee578981c77846156c7f43b5f01e",
"winston-daily-rotate-file": "^4.5.5",
"winston-duplex": "^0.1.1",
"winston-transport": "^4.4.0",
@@ -108,9 +108,9 @@
"@types/passport-jwt": "^3.0.6",
"@types/pixelmatch": "^5.2.4",
"@types/sharp": "^0.29.2",
"@types/string-similarity": "^4.0.0",
"@types/tcp-port-used": "^1.0.0",
"@types/triple-beam": "^1.3.2",
"ts-auto-guard": "*",
"ts-json-schema-generator": "^0.93.0",
"typescript-json-schema": "^0.50.1"
},


@@ -58,13 +58,13 @@ export class MessageAction extends Action {
if(this.to !== undefined) {
// parse to value
try {
const entityData = parseRedditEntity(this.to);
const entityData = parseRedditEntity(this.to, 'user');
if(entityData.type === 'user') {
recipient = entityData.name;
} else {
recipient = `/r/${entityData.name}`;
}
} catch (err) {
} catch (err: any) {
this.logger.error(`'to' field for message was not in a valid format. See ${REDDIT_ENTITY_REGEX_URL} for valid examples`);
this.logger.error(err);
err.logged = true;


@@ -98,7 +98,7 @@ export abstract class Action {
actRes.run = true;
const results = await this.process(item, ruleResults, runtimeDryrun);
return {...actRes, ...results};
} catch (err) {
} catch (err: any) {
if(!(err instanceof LoggedError)) {
this.logger.error(`Encountered error while running`, err);
}


@@ -74,7 +74,7 @@ export class App {
this.logger.error(err);
}
});
} catch (err) {
} catch (err: any) {
if (b.error === undefined) {
b.error = err.message;
}


@@ -20,6 +20,8 @@ import {ModQueueStream, UnmoderatedStream} from "../Subreddit/Streams";
import {BotResourcesManager} from "../Subreddit/SubredditResources";
import LoggedError from "../Utils/LoggedError";
import pEvent from "p-event";
import SimpleError from "../Utils/SimpleError";
import {isRateLimitError} from "../Utils/Errors";
class Bot {
@@ -42,6 +44,7 @@ class Bot {
nannyRunning: boolean = false;
nextNannyCheck: Dayjs = dayjs().add(10, 'second');
nannyRetryHandler: Function;
managerRetryHandler: Function;
nextExpiration: Dayjs = dayjs();
botName?: string;
botLink?: string;
@@ -51,6 +54,8 @@ class Bot {
sharedModqueue: boolean = false;
streamListedOnce: string[] = [];
stagger: number;
apiSample: number[] = [];
apiRollingAvg: number = 0;
apiEstDepletion?: Duration;
@@ -81,10 +86,12 @@ class Bot {
heartbeatInterval,
},
credentials: {
clientId,
clientSecret,
refreshToken,
accessToken,
reddit: {
clientId,
clientSecret,
refreshToken,
accessToken,
},
},
snoowrap: {
proxy,
@@ -92,7 +99,7 @@ class Bot {
},
polling: {
sharedMod,
stagger,
stagger = 2000,
},
queue: {
maxWorkers,
@@ -174,9 +181,9 @@ class Bot {
maxRetryAttempts: 5,
debug,
logger: snooLogWrapper(this.logger.child({labels: ['Snoowrap']}, mergeArr)),
continueAfterRatelimitError: true,
continueAfterRatelimitError: false,
});
} catch (err) {
} catch (err: any) {
if(this.error === undefined) {
this.error = err.message;
this.logger.error(err);
@@ -185,6 +192,9 @@ class Bot {
const retryHandler = createRetryHandler({maxRequestRetry: 8, maxOtherRetry: 1}, this.logger);
this.nannyRetryHandler = createRetryHandler({maxRequestRetry: 5, maxOtherRetry: 1}, this.logger);
this.managerRetryHandler = createRetryHandler({maxRequestRetry: 5, maxOtherRetry: 10, waitOnRetry: false, clearRetryCountAfter: 2}, this.logger);
this.stagger = stagger ?? 2000;
const modStreamErrorListener = (name: string) => async (err: any) => {
this.logger.error('Polling error occurred', err);
@@ -259,19 +269,23 @@ class Bot {
}
}
async testClient() {
async testClient(initial = true) {
try {
// @ts-ignore
await this.client.getMe();
this.logger.info('Test API call successful');
} catch (err) {
this.logger.error('An error occurred while trying to initialize the Reddit API Client which would prevent the entire application from running.');
if(err.name === 'StatusCodeError') {
} catch (err: any) {
if (initial) {
this.logger.error('An error occurred while trying to initialize the Reddit API Client which would prevent the entire application from running.');
}
if (err.name === 'StatusCodeError') {
const authHeader = err.response.headers['www-authenticate'];
if (authHeader !== undefined && authHeader.includes('insufficient_scope')) {
this.logger.error('Reddit responded with a 403 insufficient_scope. Please ensure you have chosen the correct scopes when authorizing your account.');
} else if(err.statusCode === 401) {
} else if (err.statusCode === 401) {
this.logger.error('It is likely a credential is missing or incorrect. Check clientId, clientSecret, refreshToken, and accessToken');
} else if(err.statusCode === 400) {
this.logger.error('Credentials may have been invalidated due to prior behavior. The error message may contain more information.');
}
this.logger.error(`Error Message: ${err.message}`);
} else {
@@ -298,10 +312,12 @@ class Bot {
}
this.logger.info(`Bot Name${botNameFromConfig ? ' (from config)' : ''}: ${this.botName}`);
for (const sub of await this.client.getModeratedSubreddits()) {
// TODO don't know a way to check permissions yet
availSubs.push(sub);
let subListing = await this.client.getModeratedSubreddits({count: 100});
while(!subListing.isFinished) {
subListing = await subListing.fetchMore({amount: 100});
}
availSubs = subListing;
this.logger.info(`u/${user.name} is a moderator of these subreddits: ${availSubs.map(x => x.display_name_prefixed).join(', ')}`);
let subsToRun: Subreddit[] = [];
@@ -324,8 +340,8 @@ class Bot {
const normalExcludes = this.excludeSubreddits.map(x => x.toLowerCase());
subsToRun = availSubs.filter(x => !normalExcludes.includes(x.display_name.toLowerCase()));
} else {
this.logger.info('No user-defined subreddit constraints detected, will run on all moderated subreddits');
subsToRun = availSubs;
this.logger.info(`No user-defined subreddit constraints detected, will run on all moderated subreddits EXCEPT own profile (${this.botAccount})`);
subsToRun = availSubs.filter(x => x.display_name_prefixed !== this.botAccount);
}
}
@@ -335,17 +351,28 @@ class Bot {
const manager = new Manager(sub, this.client, this.logger, this.cacheManager, {dryRun: this.dryRun, sharedModqueue: this.sharedModqueue, wikiLocation: this.wikiLocation, botName: this.botName, maxWorkers: this.maxWorkers});
try {
await manager.parseConfiguration('system', true, {suppressNotification: true});
} catch (err) {
} catch (err: any) {
if (!(err instanceof LoggedError)) {
this.logger.error(`Config was not valid:`, {subreddit: sub.display_name_prefixed});
this.logger.error(err, {subreddit: sub.display_name_prefixed});
}
}
// all errors from managers will count towards bot-level retry count
manager.on('error', async (err) => await this.panicOnRetries(err));
subSchedule.push(manager);
}
this.subManagers = subSchedule;
}
// if the cumulative errors exceeds configured threshold then stop ALL managers as there is most likely something very bad happening
async panicOnRetries(err: any) {
if(!this.managerRetryHandler(err)) {
for(const m of this.subManagers) {
await m.stop();
}
}
}
async destroy(causedBy: Invokee) {
this.logger.info('Stopping heartbeat and nanny processes, may take up to 5 seconds...');
const processWait = pEvent(this.emitter, 'healthStopped');
@@ -382,7 +409,7 @@ class Bot {
for (const manager of this.subManagers) {
if (manager.validConfigLoaded && manager.botState.state !== RUNNING) {
await manager.start(causedBy, {reason: 'Caused by application startup'});
await sleep(2000);
await sleep(this.stagger);
}
}
@@ -404,15 +431,15 @@ class Bot {
try {
await this.runApiNanny();
this.nextNannyCheck = dayjs().add(10, 'second');
} catch (err) {
this.logger.info('Delaying next nanny check for 1 minute due to emitted error');
} catch (err: any) {
this.logger.info('Delaying next nanny check for 2 minutes due to emitted error');
this.nextNannyCheck = dayjs().add(120, 'second');
}
}
if(dayjs().isSameOrAfter(this.nextHeartbeat)) {
try {
await this.heartbeat();
} catch (err) {
} catch (err: any) {
this.logger.error(`Error occurred during heartbeat check: ${err.message}`);
}
this.nextHeartbeat = dayjs().add(this.heartbeatInterval, 'second');
@@ -424,20 +451,39 @@ class Bot {
async heartbeat() {
const heartbeat = `HEARTBEAT -- API Remaining: ${this.client.ratelimitRemaining} | Usage Rolling Avg: ~${formatNumber(this.apiRollingAvg)}/s | Est Depletion: ${this.apiEstDepletion === undefined ? 'N/A' : this.apiEstDepletion.humanize()} (${formatNumber(this.depletedInSecs, {toFixed: 0})} seconds)`
this.logger.info(heartbeat);
// run sanity check to see if there is a service issue
try {
await this.testClient(false);
} catch (err: any) {
throw new SimpleError(`Something isn't right! This could be a Reddit API issue (service is down? buggy??) or an issue with the Bot account. Will not run heartbeat operations and will wait until next heartbeat (${dayjs.duration(this.nextHeartbeat.diff(dayjs())).humanize()}) to try again`);
}
let startedAny = false;
for (const s of this.subManagers) {
if(s.botState.state === STOPPED && s.botState.causedBy === USER) {
this.logger.debug('Skipping config check/restart on heartbeat due to previously being stopped by user', {subreddit: s.displayLabel});
continue;
}
try {
// ensure calls to wiki page are also staggered so we aren't hitting api hard when bot has a ton of subreddits to check
await sleep(this.stagger);
const newConfig = await s.parseConfiguration();
if(newConfig || (s.queueState.state !== RUNNING && s.queueState.causedBy === SYSTEM))
{
await s.startQueue('system', {reason: newConfig ? 'Config updated on heartbeat triggered reload' : 'Heartbeat detected non-running queue'});
}
if(newConfig || (s.eventsState.state !== RUNNING && s.eventsState.causedBy === SYSTEM))
{
await s.startEvents('system', {reason: newConfig ? 'Config updated on heartbeat triggered reload' : 'Heartbeat detected non-running events'});
const willStart = newConfig || (s.queueState.state !== RUNNING && s.queueState.causedBy === SYSTEM) || (s.eventsState.state !== RUNNING && s.eventsState.causedBy === SYSTEM);
if(willStart) {
// stagger restart
if (startedAny) {
await sleep(this.stagger);
}
startedAny = true;
if(newConfig || (s.queueState.state !== RUNNING && s.queueState.causedBy === SYSTEM))
{
await s.startQueue('system', {reason: newConfig ? 'Config updated on heartbeat triggered reload' : 'Heartbeat detected non-running queue'});
}
if(newConfig || (s.eventsState.state !== RUNNING && s.eventsState.causedBy === SYSTEM))
{
await s.startEvents('system', {reason: newConfig ? 'Config updated on heartbeat triggered reload' : 'Heartbeat detected non-running events'});
}
}
if(s.botState.state !== RUNNING && s.eventsState.state === RUNNING && s.queueState.state === RUNNING) {
s.botState = {
@@ -445,7 +491,7 @@ class Bot {
causedBy: 'system',
}
}
} catch (err) {
} catch (err: any) {
this.logger.info('Stopping event polling to prevent activity processing queue from backing up. Will be restarted when config update succeeds.')
await s.stopEvents('system', {reason: 'Invalid config will cause events to pile up in queue. Will be restarted when config update succeeds (next heartbeat).'});
if(!(err instanceof LoggedError)) {
@@ -472,7 +518,10 @@ class Bot {
// @ts-ignore
await this.client.getMe();
shouldRetry = false;
} catch (err) {
} catch (err: any) {
if(isRateLimitError(err)) {
throw err;
}
shouldRetry = await this.nannyRetryHandler(err);
if (!shouldRetry) {
throw err;
@@ -590,7 +639,7 @@ class Bot {
this.nannyMode = undefined;
}
} catch (err) {
} catch (err: any) {
this.logger.error(`Error occurred during nanny loop: ${err.message}`);
throw err;
}
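The manager retry handler above (`createRetryHandler({maxRequestRetry: 5, maxOtherRetry: 10, waitOnRetry: false, clearRetryCountAfter: 2})`) feeds `panicOnRetries`, which stops every manager once cumulative errors exceed the budget. A simplified sketch of a "panic after N errors" counter (the real helper lives in `util`; the semantics and the millisecond reset window below are assumptions for illustration):

```typescript
// Returns a function reporting whether the caller should keep retrying.
// Errors older than the reset window are forgiven (mirroring the idea behind
// clearRetryCountAfter); once the count exceeds maxOtherRetry it returns
// false, i.e. "panic" and stop all managers.
const createRetryCounterSketch = (maxOtherRetry: number, clearRetryCountAfterMs: number) => {
    let count = 0;
    let lastErrorAt = 0;
    return (_err: unknown, now: number = Date.now()): boolean => {
        if (lastErrorAt !== 0 && now - lastErrorAt > clearRetryCountAfterMs) {
            count = 0; // previous errors are old news; reset the budget
        }
        lastErrorAt = now;
        count++;
        return count <= maxOtherRetry; // false => stop everything
    };
};
```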


@@ -268,7 +268,7 @@ export abstract class Check implements ICheck {
// otherwise AND and did not return already so all passed
this.logger.info(`${PASS} => Rules: ${resultsSummary(allResults, this.condition)}`);
return [true, allRuleResults];
} catch (e) {
} catch (e: any) {
e.logged = true;
this.logger.warn(`Running rules failed due to uncaught exception`, e);
throw e;


@@ -83,7 +83,7 @@ class ImageData {
}
} catch (err) {
} catch (err: any) {
if(!(err instanceof SimpleError)) {
throw new Error(`Error occurred while fetching response from URL: ${err.message}`);
} else {


@@ -832,6 +832,8 @@ export interface ManagerOptions {
nickname?: string
notifications?: NotificationConfig
credentials?: ThirdPartyCredentialsJsonConfig
}
/**
@@ -948,6 +950,14 @@ export interface CommentState extends ActivityState {
* A list of SubmissionState attributes to test the Submission this comment is in
* */
submissionState?: SubmissionState[]
/**
* The (nested) level of a comment.
*
* * 0 means the comment is top-level (a direct reply to the submission)
* * a non-zero value N means the comment has N parent comments
* */
depth?: DurationComparor
}
/**
@@ -979,6 +989,8 @@ export interface SubredditState {
* A friendly description of what this State is trying to parse
* */
stateDescription?: string
isUserProfile?: boolean
}
export interface StrongSubredditState extends SubredditState {
@@ -1006,6 +1018,28 @@ export const STOPPED = 'stopped';
export const RUNNING = 'running';
export const PAUSED = 'paused';
export interface SearchAndReplaceRegExp {
/**
* The search value to test for
*
* Can be a normal string (converted to a case-sensitive literal) or a valid regular expression
*
* EX `["find this string", "/some string*\/ig"]`
*
* @examples ["find this string", "/some string*\/ig"]
* */
search: string
/**
* The replacement string/value to use when search is found
*
* This can be a literal string like `'replace with this'`, an empty string to remove the search value (`''`), or a special regex value
*
* See replacement here for more information: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace
* */
replace: string
}
export interface NamedGroup {
[name: string]: string
}
@@ -1264,7 +1298,7 @@ export interface WebCredentials {
*
* */
export interface BotInstanceJsonConfig {
credentials?: RedditCredentials
credentials?: BotCredentialsJsonConfig | RedditCredentials
/*
* The name to display for the bot. If not specified will use the name of the reddit account IE `u/TheBotName`
* */
@@ -1644,6 +1678,8 @@ export interface OperatorJsonConfig {
* */
friendly?: string,
}
credentials?: ThirdPartyCredentialsJsonConfig
}
export interface RequiredOperatorRedditCredentials extends RedditCredentials {
@@ -1657,8 +1693,23 @@ export interface RequiredWebRedditCredentials extends RedditCredentials {
redirectUri: string
}
export interface ThirdPartyCredentialsJsonConfig {
youtube?: {
apiKey: string
}
[key: string]: any
}
export interface BotCredentialsJsonConfig extends ThirdPartyCredentialsJsonConfig {
reddit: RedditCredentials
}
export interface BotCredentialsConfig extends ThirdPartyCredentialsJsonConfig {
reddit: RequiredOperatorRedditCredentials
}
export interface BotInstanceConfig extends BotInstanceJsonConfig {
credentials: RequiredOperatorRedditCredentials
credentials: BotCredentialsJsonConfig
snoowrap: {
proxy?: string,
debug?: boolean,
@@ -1720,6 +1771,7 @@ export interface OperatorConfig extends OperatorJsonConfig {
friendly?: string,
}
bots: BotInstanceConfig[]
credentials: ThirdPartyCredentialsJsonConfig
}
//export type OperatorConfig = Required<OperatorJsonConfig>;
@@ -1878,3 +1930,27 @@ export interface HistoricalStatUpdateData {
rulesCachedTotal: number
rulesTriggered: string[] | string
}
export type SearchFacetType = 'title' | 'url' | 'duplicates' | 'crossposts' | 'external';
export interface RepostItem {
value: string
createdOn?: number
source: string
sourceUrl?: string
score?: number
id: string
itemType: string
acquisitionType: SearchFacetType | 'comment'
sourceObj?: any
reqSameness?: number
}
export interface RepostItemResult extends RepostItem {
sameness: number
}
export interface StringComparisonOptions {
lengthWeight?: number,
transforms?: ((str: string) => string)[]
}
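`StringComparisonOptions` above hints at how the repost rule scores text: per the commit notes, several similarity algorithms are averaged so no anomalous score dominates, and the result is weighted by sentence length. A rough, self-contained sketch of the averaging idea (the real implementation is `stringSameness` in `util` and uses the `leven`/`string-similarity` packages; the inline Levenshtein and token-overlap algorithms here are illustrative stand-ins):

```typescript
// Plain Levenshtein edit distance (illustrative; ContextMod uses `leven`)
const levenshtein = (a: string, b: string): number => {
    const m = Array.from({length: a.length + 1}, (_, i) =>
        Array.from({length: b.length + 1}, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
    for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
            m[i][j] = Math.min(
                m[i - 1][j] + 1,
                m[i][j - 1] + 1,
                m[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),
            );
        }
    }
    return m[a.length][b.length];
};

// Average two 0-100 similarity scores so neither algorithm dominates.
// (The real rule also applies a word-count weight via lengthWeight.)
const stringSamenessSketch = (a: string, b: string): number => {
    const maxLen = Math.max(a.length, b.length);
    if (maxLen === 0) { return 100; }
    const editScore = (1 - levenshtein(a, b) / maxLen) * 100; // algo 1: edit distance
    const tokensA = new Set(a.toLowerCase().split(/\s+/));
    const tokensB = b.toLowerCase().split(/\s+/);
    const overlapScore = (tokensB.filter(t => tokensA.has(t)).length / tokensB.length) * 100; // algo 2: token overlap
    return (editScore + overlapScore) / 2;
};
```

A score of 100 means identical text; a `matchScore` threshold like 85 would then treat near-duplicates as reposts.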


@@ -13,9 +13,17 @@ import {ApproveActionJson} from "../Action/ApproveAction";
import {BanActionJson} from "../Action/BanAction";
import {RegexRuleJSONConfig} from "../Rule/RegexRule";
import {MessageActionJson} from "../Action/MessageAction";
import {RepostRuleJSONConfig} from "../Rule/RepostRule";
export type RuleJson = RecentActivityRuleJSONConfig | RepeatActivityJSONConfig | AuthorRuleJSONConfig | AttributionJSONConfig | HistoryJSONConfig | RegexRuleJSONConfig | string;
export type RuleJson = RecentActivityRuleJSONConfig | RepeatActivityJSONConfig | AuthorRuleJSONConfig | AttributionJSONConfig | HistoryJSONConfig | RegexRuleJSONConfig | RepostRuleJSONConfig | string;
export type RuleObjectJson = Exclude<RuleJson, string>
export type ActionJson = CommentActionJson | FlairActionJson | ReportActionJson | LockActionJson | RemoveActionJson | ApproveActionJson | BanActionJson | UserNoteActionJson | MessageActionJson | string;
export type ActionObjectJson = Exclude<ActionJson, string>;
// borrowed from https://github.com/jabacchetta/set-random-interval/blob/master/src/index.ts
export type SetRandomInterval = (
intervalFunction: () => void,
minDelay: number,
maxDelay: number,
) => { clear: () => void };
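The `SetRandomInterval` type above mirrors the `set-random-interval` package it credits. One possible implementation matching that signature (a sketch, not the package's actual source):

```typescript
type SetRandomInterval = (
    intervalFunction: () => void,
    minDelay: number,
    maxDelay: number,
) => { clear: () => void };

// Uniformly random delay within [minDelay, maxDelay]
const randomDelay = (minDelay: number, maxDelay: number): number =>
    minDelay + Math.random() * (maxDelay - minDelay);

const setRandomInterval: SetRandomInterval = (intervalFunction, minDelay, maxDelay) => {
    let timeout: ReturnType<typeof setTimeout> | undefined;
    const run = () => {
        timeout = setTimeout(() => {
            intervalFunction();
            run(); // re-roll the delay for the next tick
        }, randomDelay(minDelay, maxDelay));
    };
    run();
    return {
        clear: () => {
            if (timeout !== undefined) {
                clearTimeout(timeout);
            }
        },
    };
};
```

Re-rolling the delay on every tick is useful for staggering API polling so requests from many subreddit managers do not align.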


@@ -31,7 +31,7 @@ import {
CacheOptions,
BotInstanceJsonConfig,
BotInstanceConfig,
RequiredWebRedditCredentials
RequiredWebRedditCredentials, RedditCredentials, BotCredentialsJsonConfig, BotCredentialsConfig
} from "./Common/interfaces";
import {isRuleSetJSON, RuleSetJson, RuleSetObjectJson} from "./Rule/RuleSet";
import deepEqual from "fast-deep-equal";
@@ -387,10 +387,13 @@ const parseListFromEnv = (val: string | undefined) => {
export const parseDefaultBotInstanceFromEnv = (): BotInstanceJsonConfig => {
const data = {
credentials: {
clientId: process.env.CLIENT_ID,
clientSecret: process.env.CLIENT_SECRET,
accessToken: process.env.ACCESS_TOKEN,
refreshToken: process.env.REFRESH_TOKEN,
reddit: {
clientId: process.env.CLIENT_ID,
clientSecret: process.env.CLIENT_SECRET,
accessToken: process.env.ACCESS_TOKEN,
refreshToken: process.env.REFRESH_TOKEN,
},
youtube: process.env.YOUTUBE_API_KEY
},
subreddits: {
names: parseListFromEnv(process.env.SUBREDDITS),
@@ -443,6 +446,11 @@ export const parseOpConfigFromEnv = (): OperatorJsonConfig => {
clientSecret: process.env.CLIENT_SECRET,
redirectUri: process.env.REDIRECT_URI,
},
},
credentials: {
youtube: {
apiKey: process.env.YOUTUBE_API_KEY
}
}
}
@@ -476,7 +484,7 @@ export const parseOperatorConfigFromSources = async (args: any): Promise<Operato
process.env[k] = v;
}
}
} catch (err) {
} catch (err: any) {
let msg = 'No .env file found at default location (./env)';
if (envPath !== undefined) {
msg = `${msg} or OPERATOR_ENV path (${envPath})`;
@@ -492,14 +500,14 @@ export const parseOperatorConfigFromSources = async (args: any): Promise<Operato
let rawConfig;
try {
rawConfig = await readConfigFile(operatorConfig, {log: initLogger}) as object;
} catch (err) {
} catch (err: any) {
initLogger.error('Cannot continue app startup because operator config file was not parseable.');
err.logged = true;
throw err;
}
try {
configFromFile = validateJson(rawConfig, operatorSchema, initLogger) as OperatorJsonConfig;
} catch (err) {
} catch (err: any) {
initLogger.error('Cannot continue app startup because operator config file was not valid.');
throw err;
}
@@ -565,6 +573,7 @@ export const buildOperatorConfigWithDefaults = (data: OperatorJsonConfig): Opera
secret: apiSecret = randomId(),
friendly,
} = {},
credentials = {},
bots = [],
} = data;
@@ -632,11 +641,7 @@ export const buildOperatorConfigWithDefaults = (data: OperatorJsonConfig): Opera
hardLimit = 50
} = {},
snoowrap = {},
credentials: {
clientId: ci,
clientSecret: cs,
...restCred
} = {},
credentials = {},
subreddits: {
names = [],
exclude = [],
@@ -646,7 +651,6 @@ export const buildOperatorConfigWithDefaults = (data: OperatorJsonConfig): Opera
} = {},
} = x;
let botCache: StrongCache;
let botActionedEventsDefault: number;
@@ -656,10 +660,7 @@ export const buildOperatorConfigWithDefaults = (data: OperatorJsonConfig): Opera
...cacheTTLDefaults,
actionedEventsDefault: opActionedEventsDefault,
actionedEventsMax: opActionedEventsMax,
provider: {
store: 'memory',
...cacheOptDefaults
}
provider: {...defaultProvider}
};
} else {
const {
@@ -700,11 +701,42 @@ export const buildOperatorConfigWithDefaults = (data: OperatorJsonConfig): Opera
}
}
const botCreds = {
clientId: (ci as string),
clientSecret: (cs as string),
...restCred,
};
let botCreds: BotCredentialsConfig;
if((credentials as any).clientId !== undefined) {
const creds = credentials as RedditCredentials;
const {
clientId: ci,
clientSecret: cs,
...restCred
} = creds;
botCreds = {
reddit: {
clientId: (ci as string),
clientSecret: (cs as string),
...restCred,
}
}
} else {
const creds = credentials as BotCredentialsJsonConfig;
const {
reddit: {
clientId: ci,
clientSecret: cs,
...restRedditCreds
},
...rest
} = creds;
botCreds = {
reddit: {
clientId: (ci as string),
clientSecret: (cs as string),
...restRedditCreds,
},
...rest
}
}
if (botCache.provider.prefix === undefined || botCache.provider.prefix === defaultProvider.prefix) {
// need to provide unique prefix to bot
botCache.provider.prefix = buildCachePrefix([botCache.provider.prefix, 'bot', (botName || objectHash.sha1(botCreds))]);
@@ -776,6 +808,7 @@ export const buildOperatorConfigWithDefaults = (data: OperatorJsonConfig): Opera
friendly
},
bots: hydratedBots,
credentials,
};
return config;
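The credential handling above accepts either the older flat `RedditCredentials` shape or the newer nested `BotCredentialsJsonConfig`. The branching boils down to a small normalizer (a sketch of the logic in the diff, with trimmed-down interface shapes; the actual interfaces carry more fields):

```typescript
interface RedditCredentials { clientId?: string; clientSecret?: string; refreshToken?: string; accessToken?: string; }
interface BotCredentialsJsonConfig { reddit: RedditCredentials; [key: string]: any; }

// Detect the legacy flat shape by the presence of clientId at the top level,
// then wrap it so downstream code can always read credentials.reddit
const normalizeCredentials = (
    credentials: RedditCredentials | BotCredentialsJsonConfig
): BotCredentialsJsonConfig => {
    if ((credentials as RedditCredentials).clientId !== undefined) {
        const {clientId, clientSecret, ...restCred} = credentials as RedditCredentials;
        return {reddit: {clientId, clientSecret, ...restCred}};
    }
    return credentials as BotCredentialsJsonConfig;
};
```

This keeps backwards compatibility for existing configs while letting third-party credentials (e.g. `youtube`) live alongside `reddit` in the nested shape.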


@@ -12,11 +12,11 @@ export interface AuthorRuleConfig {
/**
* Will "pass" if any set of AuthorCriteria passes
* */
include: AuthorCriteria[];
include?: AuthorCriteria[];
/**
* Only runs if include is not present. Will "pass" if any set of the AuthorCriteria does not pass
* */
exclude: AuthorCriteria[];
exclude?: AuthorCriteria[];
}
export interface AuthorRuleOptions extends AuthorRuleConfig, RuleOptions {
@@ -34,8 +34,13 @@ export class AuthorRule extends Rule {
constructor(options: AuthorRuleOptions) {
super(options);
this.include = options.include.map(x => new Author(x));
this.exclude = options.exclude.map(x => new Author(x));
const {
include,
exclude,
} = options;
this.include = include !== undefined ? include.map(x => new Author(x)) : [];
this.exclude = exclude !== undefined ? exclude.map(x => new Author(x)) : [];
if(this.include.length === 0 && this.exclude.length === 0) {
throw new Error('At least one of the properties [include,exclude] on Author Rule must not be empty');


@@ -165,7 +165,7 @@ export class RecentActivityRule extends Rule {
// if (referenceImage.preferredResolution !== undefined) {
// await (referenceImage.getSimilarResolutionVariant(...referenceImage.preferredResolution) as ImageData).sharp();
// }
} catch (err) {
} catch (err: any) {
this.logger.verbose(err.message);
}
}
@@ -241,11 +241,11 @@ export class RecentActivityRule extends Rule {
if (sameImage) {
return x;
}
} catch (err) {
} catch (err: any) {
this.logger.warn(`Unexpected error encountered while pixel-comparing images, will skip comparison => ${err.message}`);
}
}
} catch (err) {
} catch (err: any) {
if(!err.message.includes('did not end with a valid image extension')) {
this.logger.warn(`Will not compare image from Submission ${x.id} due to error while parsing image URL => ${err.message}`);
}


@@ -296,6 +296,7 @@ export class RegexRule extends Rule {
const logSummary: string[] = [];
let index = 0;
let matchSample = undefined;
for (const c of criteriaResults) {
index++;
let msg = `Criteria ${c.criteria.name || `#${index}`} ${triggeredIndicator(c.triggered)}`;
@@ -309,8 +310,8 @@ export class RegexRule extends Rule {
}
msg = `${msg} (Window: ${c.criteria.window})`;
if(c.matches.length > 0) {
let matchSample = `-- Matched Values: ${c.matches.slice(0, 3).map(x => `"${x}"`).join(', ')}${c.matches.length > 3 ? `, and ${c.matches.length - 3} more...` : ''}`;
logSummary.push(`${msg} ${matchSample}`);
matchSample = `${c.matches.slice(0, 3).map(x => `"${x}"`).join(', ')}${c.matches.length > 3 ? `, and ${c.matches.length - 3} more...` : ''}`;
logSummary.push(`${msg} -- Matched Values: ${matchSample}`);
} else {
logSummary.push(msg);
}
@@ -319,7 +320,7 @@ export class RegexRule extends Rule {
const result = `${triggeredIndicator(criteriaMet)} ${logSummary.join(' || ')}`;
this.logger.verbose(result);
return Promise.resolve([criteriaMet, this.getResult(criteriaMet, {result, data: criteriaResults})]);
return Promise.resolve([criteriaMet, this.getResult(criteriaMet, {result, data: {results: criteriaResults, matchSample }})]);
}
protected getMatchesFromActivity(a: (Submission | Comment), testOn: string[], reg: RegExp): string[] {

src/Rule/RepostRule.ts Normal file

@@ -0,0 +1,897 @@
import {Rule, RuleJSONConfig, RuleOptions, RuleResult} from "./index";
import {Listing, SearchOptions} from "snoowrap";
import Submission from "snoowrap/dist/objects/Submission";
import Comment from "snoowrap/dist/objects/Comment";
import {
compareDurationValue,
comparisonTextOp,
FAIL, formatNumber,
isRepostItemResult, parseDurationComparison, parseGenericValueComparison,
parseUsableLinkIdentifier,
PASS, searchAndReplace, stringSameness, triggeredIndicator, windowToActivityWindowCriteria, wordCount
} from "../util";
import {
ActivityWindow,
ActivityWindowType,
CompareValue, DurationComparor,
JoinOperands,
RepostItem,
RepostItemResult,
SearchAndReplaceRegExp,
SearchFacetType,
} from "../Common/interfaces";
import objectHash from "object-hash";
import {getActivities, getAttributionIdentifier} from "../Utils/SnoowrapUtils";
import Fuse from "fuse.js";
import leven from "leven";
import {YoutubeClient, commentsAsRepostItems} from "../Utils/ThirdParty/YoutubeClient";
import dayjs from "dayjs";
import {rest} from "lodash";
const parseYtIdentifier = parseUsableLinkIdentifier();
export interface TextMatchOptions {
/**
* The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match
*
* Note: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere
*
* Defaults to `85` (85%)
*
* @default 85
* @example [85]
* */
matchScore?: number
/**
* The minimum number of words the activity being checked must contain in order for this rule to run
*
* If the word count is below the minimum the rule fails
*
* Defaults to 2
*
* @default 2
* @example [2]
* */
minWordCount?: number
/**
* Should text matching be case sensitive?
*
* Defaults to false
*
* @default false
* @example [false]
**/
caseSensitive?: boolean
}
export interface TextTransformOptions {
/**
* A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.
*
* * If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text
* * If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text
* */
transformations?: SearchAndReplaceRegExp[]
/**
* Specify a separate set of transformations for the activity text (submission title or comment)
*
* To perform no transformations when `transformations` is defined set this to an empty array (`[]`)
* */
transformationsActivity?: SearchAndReplaceRegExp[]
}
export interface SearchFacetJSONConfig extends TextMatchOptions, TextTransformOptions, ActivityWindow {
kind: SearchFacetType | SearchFacetType[]
}
export interface SearchFacet extends SearchFacetJSONConfig {
kind: SearchFacetType
}
export type TimeBasedSelector = "newest" | "oldest" | "any" | "all";
export interface OccurredAt {
/**
* Which repost to test on
*
* * `any` -- ANY repost passing `condition` will cause this criteria to be true
* * `all` -- ALL reposts must pass `condition` for this criteria to be true
* */
"testOn": TimeBasedSelector,
"condition": DurationComparor
}
export interface OccurrenceTests {
count?: {
condition?: JoinOperands
/**
* An array of strings containing a comparison operator and the number of repost occurrences to compare against
*
* Examples:
*
* * `">= 7"` -- TRUE if 7 or more reposts were found
* * `"< 1"` -- TRUE if no reposts were found (fewer than 1)
* */
test: CompareValue[]
}
/**
* Test the time the reposts occurred at
* */
time?: {
/**
* How to test all the specified comparisons
*
* * AND -- All criteria must be true
* * OR -- Any criteria must be true
*
* Defaults to AND
*
* @default AND
* @example ["AND", "OR"]
* */
condition?: JoinOperands
/**
* An array of time-based conditions to test against found reposts (test when a repost was made)
* */
test: OccurredAt[]
}
}
/**
* A set of criteria used to find reposts
*
* Contains options and conditions used to define how candidate reposts are retrieved and if they are a match.
*
* */
export interface RepostCriteria extends ActivityWindow, TextMatchOptions, TextTransformOptions {
/**
* Define how to find candidate reposts
*
* * **title** -- search reddit for submissions with the same title
* * **url** -- search reddit for submissions with the same url
* * **external** -- WHEN ACTIVITY IS A COMMENT - tries to get comments from external source (youtube, twitter, etc...)
* */
searchOn?: (SearchFacetType | SearchFacetJSONConfig)[]
/**
* A set of comparisons to test against the number of reposts found
*
* If not specified the default is "AND [occurrences] > 0" IE any repost makes this test pass
* */
occurrences?: {
/**
* How to test all the specified comparisons
*
* * AND -- All criteria must be true
* * OR -- Any criteria must be true
*
* Defaults to AND
*
* @default AND
* @example ["AND", "OR"]
* */
condition?: JoinOperands
criteria?: OccurrenceTests[]
}
/**
* Test the time the reposts occurred at
* */
occurredAt?: {
/**
* How to test all the specified comparisons
*
* * AND -- All criteria must be true
* * OR -- Any criteria must be true
*
* Defaults to AND
*
* @default AND
* @example ["AND", "OR"]
* */
condition?: JoinOperands
/**
* An array of time-based conditions to test against found reposts (test when a repost was made)
* */
criteria: OccurredAt[]
}
/**
* The maximum number of comments/submissions to check
*
* In both cases this list is gathered from sorting all submissions or all comments from all submissions by number of votes and taking the "top" maximum specified
*
* For comment checks this is the number of comments cached
*
* @default 50
* @example [50]
* */
maxRedditItems?: number
/**
* The maximum number of external items (youtube comments) to check (and cache for comment checks)
*
* @default 50
* @example [50]
* */
maxExternalItems?: number
}
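Tying the interfaces together, a hypothetical `RepostCriteria` object (field names come from the interfaces above; all values are illustrative) that matches when two or more near-identical titles are found and the newest one is less than 90 days old:

```typescript
const exampleCriteria = {
    searchOn: ['title', 'url'],
    matchScore: 90,
    maxRedditItems: 50,
    occurrences: {
        condition: 'AND',
        criteria: [
            {
                // at least 2 reposts found...
                count: {test: ['>= 2']},
                // ...and the newest repost was made less than 90 days ago
                time: {test: [{testOn: 'newest', condition: '< 90 days'}]}
            }
        ]
    }
};
```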
export interface CriteriaResult {
passed: boolean
conditionsSummary: string
items: RepostItemResult[]
}
const parentSubmissionSearchFacetDefaults = {
title: {
matchScore: 85,
minWordCount: 3
},
url: {
matchScore: 0, // when looking for submissions to find repost comments on automatically include any with exact same url
},
duplicates: {
matchScore: 0, // when looking for submissions to find repost comments on automatically include any that reddit thinks are duplicates
},
crossposts: {
matchScore: 0, // when looking for submissions to find repost comments on automatically include any that reddit thinks are crossposts
},
external: {}
}
const isSearchFacetType = (val: any): val is SearchFacetType => {
if (typeof val === 'string') {
return ['title', 'url', 'duplicates', 'crossposts', 'external'].includes(val);
}
return false;
}
const generateSearchFacet = (val: SearchFacetType | SearchFacetJSONConfig): SearchFacet[] => {
let facets: SearchFacet[] = [];
if (isSearchFacetType(val)) {
facets.push({
kind: val
});
} else if (Array.isArray(val.kind)) {
facets = facets.concat(val.kind.map(x => ({...val, kind: x})));
} else {
facets.push(val as SearchFacet);
}
return facets.map(x => {
return {
...parentSubmissionSearchFacetDefaults[x.kind],
...x,
}
});
}
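The intended expansion behavior of `generateSearchFacet` can be sketched standalone: a bare string gets per-kind defaults, an array `kind` fans out into one facet per kind, and explicit options override the defaults. (Self-contained sketch; the types are simplified stand-ins for the real `SearchFacet` definitions.)

```typescript
type Kind = 'title' | 'url' | 'duplicates' | 'crossposts' | 'external';
interface Facet { kind: Kind; matchScore?: number; minWordCount?: number }

// simplified copy of parentSubmissionSearchFacetDefaults
const defaults: Record<Kind, Partial<Facet>> = {
    title: {matchScore: 85, minWordCount: 3},
    url: {matchScore: 0},
    duplicates: {matchScore: 0},
    crossposts: {matchScore: 0},
    external: {},
};

const expand = (val: Kind | { kind: Kind | Kind[], matchScore?: number }): Facet[] => {
    // string shorthand -> single facet; array kind -> one facet per kind
    let facets: Facet[] =
        typeof val === 'string' ? [{kind: val}]
            : Array.isArray(val.kind) ? val.kind.map(k => ({...val, kind: k}))
                : [val as Facet];
    // per-kind defaults first, explicit options win
    return facets.map(f => ({...defaults[f.kind], ...f}));
};

console.log(expand('title'));
// e.g. [{ matchScore: 85, minWordCount: 3, kind: 'title' }]
console.log(expand({kind: ['url', 'duplicates'], matchScore: 10}));
```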
export class RepostRule extends Rule {
criteria: RepostCriteria[]
condition: JoinOperands;
submission?: Submission;
constructor(options: RepostRuleOptions) {
super(options);
const {
criteria = [{}],
condition = 'OR'
} = options || {};
if (criteria.length < 1) {
throw new Error('Must provide at least one RepostCriteria');
}
this.criteria = criteria;
this.condition = condition;
}
getKind(): string {
return 'Repost';
}
protected getSpecificPremise(): object {
return {
criteria: this.criteria,
condition: this.condition
}
}
// @ts-ignore
protected async getSubmission(item: Submission | Comment) {
if (item instanceof Comment) {
// @ts-ignore
return await this.client.getSubmission(item.link_id).fetch();
}
return item;
}
protected async process(item: Submission | Comment): Promise<[boolean, RuleResult]> {
let criteriaResults: CriteriaResult[] = [];
let ytClient: YoutubeClient | undefined = undefined;
let criteriaMatchedResults: RepostItemResult[] = [];
let totalSubs = 0;
let totalCommentSubs = 0;
let totalComments = 0;
let totalExternal = new Map<string,number>();
let fromCache = false;
let andFail = false;
for (const rCriteria of this.criteria) {
criteriaMatchedResults = [];
const {
searchOn = (item instanceof Submission ? ['title', 'url', 'duplicates', 'crossposts'] : ['external', 'title', 'url', 'duplicates', 'crossposts']),
//criteria = {},
maxRedditItems = 50,
maxExternalItems = 50,
window = 20,
...restCriteria
} = rCriteria;
const searchFacets = searchOn.map(x => generateSearchFacet(x)).flat(1) as SearchFacet[];
const includeCrossposts = searchFacets.some(x => x.kind === 'crossposts');
// in getDuplicate() options add "crossposts_only=1" to get only crossposts https://www.reddit.com/r/redditdev/comments/b4t5g4/get_all_the_subreddits_that_a_post_has_been/
// if a submission is a crosspost it has "crosspost_parent" attribute https://www.reddit.com/r/redditdev/comments/l46y2l/check_if_post_is_a_crosspost/
const strongWindow = windowToActivityWindowCriteria(window);
const candidateHash = `repostItems-${item instanceof Submission ? item.id : item.link_id}-${objectHash.sha1({
window,
searchOn
})}`;
let items: (RepostItem|RepostItemResult)[] = [];
let cacheRes = undefined;
if (item instanceof Comment) {
cacheRes = await this.resources.cache.get(candidateHash) as ((RepostItem|RepostItemResult)[] | undefined | null);
}
if (cacheRes === undefined || cacheRes === null) {
const sub = await this.getSubmission(item);
let dups: (Submission[] | undefined) = undefined;
for (const sf of searchFacets) {
const {
matchScore = 85,
minWordCount = 3,
transformations = [],
} = sf;
if (sf.kind === 'external') {
const attribution = getAttributionIdentifier(sub);
switch (attribution.provider) {
case 'YouTube':
const ytCreds = this.resources.getThirdPartyCredentials('youtube')
if (ytCreds === undefined) {
throw new Error('Cannot extract comments from Youtube because a Youtube Data API key was not provided in configuration');
}
if (ytClient === undefined) {
ytClient = new YoutubeClient(ytCreds.apiKey);
}
const ytComments = commentsAsRepostItems(await ytClient.getVideoTopComments(sub.url, maxExternalItems));
items = items.concat(ytComments)
totalExternal.set('Youtube comments', (totalExternal.get('Youtube comments') ?? 0) + ytComments.length);
break;
default:
if (attribution.provider === undefined) {
this.logger.debug('Unable to determine external provider');
continue;
} else {
this.logger.debug(`External parsing of ${attribution.provider} is not supported yet.`);
continue;
}
}
} else {
let subs: Submission[];
if (['title', 'url'].includes(sf.kind)) {
let query: string;
let searchFunc: (limit: number) => Promise<Listing<Submission | Comment>>;
if (sf.kind === 'title') {
query = (await this.getSubmission(item)).title;
searchFunc = (limit: number) => {
let opts: SearchOptions = {
query,
limit,
sort: 'relevance'
};
if (strongWindow.subreddits?.include !== undefined && strongWindow.subreddits?.include.length > 0) {
opts.restrictSr = true;
opts.subreddit = strongWindow.subreddits?.include.join('+');
}
return this.client.search(opts);
}
} else {
const attr = getAttributionIdentifier(sub);
if (attr.provider === 'YouTube') {
const ytId = parseYtIdentifier(sub.url);
query = `url:https://youtu.be/${ytId}`;
} else {
query = `url:${sub.url}`;
}
searchFunc = (limit: number) => {
let opts: SearchOptions = {
query,
limit,
sort: 'top'
};
if (strongWindow.subreddits?.include !== undefined && strongWindow.subreddits?.include.length > 0) {
opts.restrictSr = true;
opts.subreddit = strongWindow.subreddits?.include.join('+');
}
return this.client.search(opts);
}
}
subs = await getActivities(searchFunc, {window: strongWindow}) as Submission[];
} else {
if (dups === undefined) {
let searchFunc: (limit: number) => Promise<Listing<Submission | Comment>> = (limit: number) => {
// this does not work correctly
// see https://github.com/not-an-aardvark/snoowrap/issues/320
// searchFunc = (limit: number) => {
// return sub.getDuplicates({crossposts_only: 0, limit});
// };
return this.client.oauthRequest({
uri: `duplicates/${sub.id}`,
qs: {
limit,
}
}).then(x => {
return Promise.resolve(x.comments) as Promise<Listing<Submission>>
});
};
subs = await getActivities(searchFunc, {window: strongWindow}) as Submission[];
dups = subs;
} else {
subs = dups;
}
if (sf.kind === 'duplicates') {
// @ts-ignore
subs = subs.filter(x => x.crosspost_parent === undefined)
} else {
// @ts-ignore
subs = subs.filter(x => x.crosspost_parent !== undefined && x.crosspost_parent === sub.id)
}
}
// filter by minimum word count
subs = subs.filter(x => wordCount(x.title) > minWordCount);
items = items.concat(subs.map(x => ({
value: searchAndReplace(x.title, transformations),
createdOn: x.created,
source: 'reddit',
sourceUrl: x.permalink,
id: x.id,
score: x.score,
itemType: 'submission',
acquisitionType: sf.kind,
sourceObj: x,
reqSameness: matchScore,
})));
}
}
if (!includeCrossposts) {
// remove submissions if they are official crossposts of the submission being checked and searchOn did not include 'crossposts'
items = items.filter(x => x.itemType !== 'submission' || !(x.sourceObj.crosspost_parent !== undefined && x.sourceObj.crosspost_parent === sub.id))
}
let sourceTitle = searchAndReplace(sub.title, restCriteria.transformationsActivity ?? []);
// do submission scoring BEFORE pruning duplicates because...
// we might end up in a situation where we get the same submission for both title and url
// -- url is always a repost but title is not guaranteed, and if we remove the url item but not the title item we could potentially filter the title submission out and miss this repost
items = items.reduce((acc: (RepostItem|RepostItemResult)[], x) => {
if(x.itemType === 'submission') {
totalSubs++;
const sf = searchFacets.find(y => y.kind === x.acquisitionType) as SearchFacet;
let cleanTitle = x.value;
if (!(sf.caseSensitive ?? false)) {
cleanTitle = cleanTitle.toLowerCase();
}
const strMatchResults = stringSameness(!(sf.caseSensitive ?? false) ? sourceTitle.toLowerCase() : sourceTitle, cleanTitle);
if(strMatchResults.highScoreWeighted >= (x.reqSameness as number)) {
return acc.concat({
...x,
sameness: Math.min(strMatchResults.highScoreWeighted, 100),
});
}
return acc;
}
return acc.concat(x);
}, []);
// now remove duplicate submissions
items = items.reduce((acc: RepostItem[], curr) => {
if(curr.itemType !== 'submission') {
return acc.concat(curr);
}
const subId = curr.sourceObj.id;
if (sub.id !== subId && !acc.some(x => x.itemType === 'submission' && x.sourceObj.id === subId)) {
return acc.concat(curr);
}
return acc;
}, []);
if (item instanceof Comment) {
// we need to gather comments from submissions
// first cut down the number of submissions to retrieve because we don't care about having ALL submissions,
// just the most popular comments (which will be in the most popular submissions)
let subs = items.filter(x => x.itemType === 'submission').map(x => x.sourceObj) as Submission[];
totalCommentSubs += subs.length;
const nonSubItems = items.filter(x => x.itemType !== 'submission' && wordCount(x.value) > (restCriteria.minWordCount ?? 3));
subs.sort((a, b) => a.score - b.score).reverse();
// take top 10 submissions
subs = subs.slice(0, 10);
let comments: Comment[] = [];
for (const sub of subs) {
const commFunc = (limit: number) => {
return this.client.oauthRequest({
uri: `${sub.subreddit_name_prefixed}/comments/${sub.id}`,
// get ONLY top-level comments, sorted by Top
qs: {
sort: 'top',
depth: 0,
limit,
}
}).then(x => {
return x.comments as Promise<Listing<Comment>>
});
}
// and return the top 20 most popular
const subComments = await getActivities(commFunc, {window: {count: 20}, skipReplies: true}) as Listing<Comment>;
comments = comments.concat(subComments);
}
// sort by highest scores
comments.sort((a, b) => a.score - b.score).reverse();
// filter out all comments with fewer words than required (prevent false negatives)
comments = comments.filter(x => wordCount(x.body) > (restCriteria.minWordCount ?? 3));
totalComments += Math.min(comments.length, maxRedditItems);
// and take the user-defined maximum number of items
items = nonSubItems.concat(comments.slice(0, maxRedditItems).map(x => ({
value: searchAndReplace(x.body, restCriteria.transformations ?? []),
createdOn: x.created,
source: 'reddit',
id: x.id,
sourceUrl: x.permalink,
score: x.score,
itemType: 'comment',
acquisitionType: 'comment'
})));
}
// cache items for 20 minutes
await this.resources.cache.set(candidateHash, items, {ttl: 1200});
} else {
items = cacheRes;
totalExternal = items.reduce((acc, curr) => {
if(curr.acquisitionType === 'external') {
acc.set(`${curr.source} comments`, (acc.get(`${curr.source} comments`) ?? 0 ) + 1);
return acc;
}
return acc;
}, new Map<string, number>());
//totalSubs = items.filter(x => x.itemType === 'submission').length;
//totalCommentSubs = totalSubs;
totalComments = items.filter(x => x.itemType === 'comment' && x.source === 'reddit').length;
fromCache = true;
}
const {
matchScore = 85,
caseSensitive = false,
transformations = [],
transformationsActivity = transformations,
occurrences = {
condition: 'AND',
criteria: [
{
count: {
test: ['> 0']
}
}
]
},
} = restCriteria;
if(item instanceof Submission) {
// we've already done difference calculations in the searchFacet phase
// and when the check is for a sub it means we are only checking if the submission has been reposted which means either:
// * very similar title (default sameness of 85% or more)
// * duplicate/same URL -- which is a repost, duh
// so just add all items to critMatches at this point
criteriaMatchedResults = criteriaMatchedResults.concat(items.filter(x => "sameness" in x) as RepostItemResult[]);
} else {
let sourceContent = searchAndReplace(item.body, transformationsActivity);
if (!caseSensitive) {
sourceContent = sourceContent.toLowerCase();
}
for (const i of items) {
const itemContent = !caseSensitive ? i.value.toLowerCase() : i.value;
const strMatchResults = stringSameness(sourceContent, itemContent);
if(strMatchResults.highScoreWeighted >= matchScore) {
criteriaMatchedResults.push({
...i,
// @ts-ignore
reqSameness: matchScore,
sameness: Math.min(strMatchResults.highScoreWeighted, 100)
});
}
}
}
// now do occurrence and time tests
const {
condition: occCondition = 'AND',
criteria: occCriteria = [
{
count: {
test: ['> 0']
}
}
]
} = occurrences;
let orPass = false;
let occurrenceReason = null;
for(const occurrenceTest of occCriteria) {
const {
count:{
condition: oCondition = 'AND',
test: oCriteria = []
} = {},
time: {
condition: tCondition = 'AND',
test: tCriteria = [],
} = {}
} = occurrenceTest;
let conditionFailSummaries = [];
const passedConditions = [];
const failedConditions = [];
for (const oc of oCriteria) {
const ocCompare = parseGenericValueComparison(oc);
const ocMatch = comparisonTextOp(criteriaMatchedResults.length, ocCompare.operator, ocCompare.value);
if (ocMatch) {
passedConditions.push(oc);
} else {
failedConditions.push(oc);
if (oCondition === 'AND') {
conditionFailSummaries.push(`(AND) occurrence test ${oc} was not true`);
break;
}
}
}
if (oCondition === 'OR' && passedConditions.length === 0 && oCriteria.length > 0) {
conditionFailSummaries.push('(OR) No occurrence tests passed');
}
const existingPassed = passedConditions.length;
if (conditionFailSummaries.length === 0) {
const timeAwareReposts = [...criteriaMatchedResults].filter(x => x.createdOn !== undefined).sort((a, b) => (a.createdOn as number) - (b.createdOn as number));
for (const tc of tCriteria) {
let toTest: RepostItemResult[] = [];
const durationCompare = parseDurationComparison(tc.condition);
switch (tc.testOn) {
case 'newest':
case 'oldest':
if (tc.testOn === 'newest') {
toTest = timeAwareReposts.slice(-1);
} else {
toTest = timeAwareReposts.slice(0, 1);
}
break;
case 'any':
case 'all':
toTest = timeAwareReposts;
break;
}
const timePass = tc.testOn === 'any' ? toTest.some(x => compareDurationValue(durationCompare, dayjs.unix(x.createdOn as number))) : toTest.every(x => compareDurationValue(durationCompare, dayjs.unix(x.createdOn as number)));
if (timePass) {
passedConditions.push(tc.condition);
} else {
failedConditions.push(tc.condition);
if (tCondition === 'AND') {
conditionFailSummaries.push(`(AND) ${tc.condition} was not true`);
break;
}
}
}
if (tCondition === 'OR' && tCriteria.length > 0 && passedConditions.length === existingPassed) {
conditionFailSummaries.push('(OR) No time-based tests passed');
}
}
if(conditionFailSummaries.length !== 0 && occCondition === 'AND') {
// failed occurrence tests (high-level)
occurrenceReason = conditionFailSummaries.join(' | ');
break;
}
if(passedConditions.length > 0 && occCondition === 'OR') {
occurrenceReason = passedConditions.join(' | ');
orPass = true;
break;
}
}
let passed = occCriteria.length === 0;
if(occCriteria.length > 0) {
if(occCondition === 'OR') {
passed = orPass;
occurrenceReason = occurrenceReason === null ? 'No occurrence test sets passed' : occurrenceReason;
} else if(occCondition === 'AND') {
passed = occurrenceReason === null;
occurrenceReason = occurrenceReason === null ? 'All tests passed' : occurrenceReason;
}
//passed = (occCondition === 'OR' && orPass) || (occurrenceFailureReason === null && occCondition === 'AND')
}
const results = {
passed,
conditionsSummary: occurrenceReason as string,
items: criteriaMatchedResults
};
criteriaResults.push(results)
if(!results.passed) {
if(this.condition === 'AND') {
andFail = true;
break;
}
} else if(this.condition === 'OR') {
break;
}
}
// get all repost items for stats and SCIENCE
const repostItemResults = [...criteriaResults
// only want reposts from criteria that passed
.filter(x => x.passed).map(x => x.items)
.flat()
// make sure we are only accumulating unique reposts
.reduce((acc, curr) => {
const hash = `${curr.source}-${curr.itemType}-${curr.id}`;
if (!acc.has(hash)) {
acc.set(hash, curr);
}
return acc;
}, new Map<string, RepostItemResult>()).values()];
repostItemResults.sort((a, b) => a.sameness - b.sameness).reverse();
const foundRepost = criteriaResults.length > 0;
let avgSameness = null;
let closestSummary = null;
let closestSameness = null;
let searchCandidateSummary = '';
if(item instanceof Comment) {
searchCandidateSummary = `Searched top ${totalComments} comments in top 10 ${fromCache ? '' : `of ${totalCommentSubs} `}most popular submissions`;
if(totalExternal.size > 0) {
searchCandidateSummary += ", ";
const extSumm: string[] = [];
totalExternal.forEach((v, k) => {
extSumm.push(`${v} ${k}`);
});
searchCandidateSummary += extSumm.join(', ');
}
} else {
searchCandidateSummary = `Searched ${totalSubs} submissions`
}
let summary = `${searchCandidateSummary} and found ${repostItemResults.length} reposts.`;
if(repostItemResults.length > 0) {
avgSameness = formatNumber(repostItemResults.reduce((acc, curr) => acc + curr.sameness, 0) / repostItemResults.length);
const closest = repostItemResults[0];
summary += ` --- Closest Match => >> ${closest.value} << from ${closest.source} (${closest.sourceUrl}) with ${formatNumber(closest.sameness)}% sameness.`
closestSummary = `matched a ${closest.itemType} from ${closest.source}`;
closestSameness = closest.sameness;
if(repostItemResults.length > 1) {
summary += ` Avg ${formatNumber(avgSameness)}%`;
}
}
let passed;
if(this.condition === 'AND') {
const failedCrit = criteriaResults.find(x => !x.passed);
if(failedCrit !== undefined) {
summary += ` BUT a criteria failed >> ${failedCrit.conditionsSummary} << and rule has AND condition.`;
passed = false;
} else {
passed = true;
}
} else {
const passedCrit = criteriaResults.find(x => x.passed);
if(passedCrit === undefined) {
summary += ` BUT all criteria failed`;
passed = false;
} else {
passed = true;
}
}
const result = `${passed ? PASS : FAIL} ${summary}`;
this.logger.verbose(result);
return [passed, this.getResult(passed, {
result,
data: {
allResults: criteriaResults,
closestSameness: passed ? formatNumber(closestSameness as number) : undefined,
closestSummary: passed ? closestSummary : undefined,
}
})];
}
}
interface RepostConfig {
/**
* A list of Repost Criteria used to find and match candidate reposts
* @minItems 1
* */
criteria?: RepostCriteria[]
/**
* * If `OR` then any set of Criteria that pass will trigger the Rule
* * If `AND` then all Criteria sets must pass to trigger the Rule
*
* @default "OR"
* */
condition?: 'AND' | 'OR'
}
export interface RepostRuleOptions extends RepostConfig, RuleOptions {
}
/**
* Search for reposts of a Submission or Comment
*
* * For submissions the title or URL can be searched and matched against
* * For comments, candidate comments are gathered from similar reddit submissions and/or external sources (youtube, twitter, etc..) and then matched against
*
* */
export interface RepostRuleJSONConfig extends RepostConfig, RuleJSONConfig {
/**
* @examples ["repost"]
* */
kind: 'repost'
}

View File

@@ -8,6 +8,7 @@ import HistoryRule, {HistoryJSONConfig} from "./HistoryRule";
import RegexRule, {RegexRuleJSONConfig} from "./RegexRule";
import {SubredditResources} from "../Subreddit/SubredditResources";
import Snoowrap from "snoowrap";
import {RepostRule, RepostRuleJSONConfig} from "./RepostRule";
export function ruleFactory
(config: RuleJSONConfig, logger: Logger, subredditName: string, resources: SubredditResources, client: Snoowrap): Rule {
@@ -31,6 +32,9 @@ export function ruleFactory
case 'regex':
cfg = config as RegexRuleJSONConfig;
return new RegexRule({...cfg, logger, subredditName, resources, client});
case 'repost':
cfg = config as RepostRuleJSONConfig;
return new RepostRule({...cfg, logger, subredditName, resources, client});
default:
throw new Error('rule "kind" was not recognized.');
}

View File

@@ -117,13 +117,13 @@ export abstract class Rule implements IRule, Triggerable {
this.logger.verbose('(Skipped) Exclusive author criteria not matched');
return Promise.resolve([null, this.getResult(null, {result: 'Exclusive author criteria not matched'})]);
}
} catch (err) {
} catch (err: any) {
this.logger.error('Error occurred during Rule pre-process checks');
throw err;
}
try {
return this.process(item);
} catch (err) {
} catch (err: any) {
this.logger.error('Error occurred while processing rule');
throw err;
}
@@ -240,6 +240,6 @@ export interface RuleJSONConfig extends IRule {
* The kind of rule to run
* @examples ["recentActivity", "repeatActivity", "author", "attribution", "history"]
*/
kind: 'recentActivity' | 'repeatActivity' | 'author' | 'attribution' | 'history' | 'regex'
kind: 'recentActivity' | 'repeatActivity' | 'author' | 'attribution' | 'history' | 'regex' | 'repost'
}

View File

@@ -167,6 +167,11 @@
"deleted": {
"type": "boolean"
},
"depth": {
"description": "The (nested) level of a comment.\n\n* 0 means the comment is at top-level (replying to submission)\n* non-zero, Nth value means the comment has N parent comments",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"distinguished": {
"type": "boolean"
},

View File

@@ -671,8 +671,6 @@
}
},
"required": [
"exclude",
"include",
"kind"
],
"type": "object"
@@ -1258,6 +1256,9 @@
{
"$ref": "#/definitions/RegexRuleJSONConfig"
},
{
"$ref": "#/definitions/RepostRuleJSONConfig"
},
{
"$ref": "#/definitions/RuleSetJson"
},
@@ -1296,6 +1297,11 @@
"deleted": {
"type": "boolean"
},
"depth": {
"description": "The (nested) level of a comment.\n\n* 0 means the comment is at top-level (replying to submission)\n* non-zero, Nth value means the comment has N parent comments",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"distinguished": {
"type": "boolean"
},
@@ -2006,6 +2012,76 @@
],
"type": "object"
},
"OccurredAt": {
"properties": {
"condition": {
"description": "A duration and how to compare it against a value\n\nThe syntax is `(< OR > OR <= OR >=) <number> <unit>` EX `> 100 days`, `<= 2 months`\n\n* EX `> 100 days` => Passes if the date being compared is before 100 days ago\n* EX `<= 2 months` => Passes if the date being compared is after or equal to 2 months\n\nUnit must be one of [DayJS Duration units](https://day.js.org/docs/en/durations/creating)\n\n[See] https://regexr.com/609n8 for example",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"testOn": {
"$ref": "#/definitions/TimeBasedSelector",
"description": "Which repost to test on\n\n* `any` -- ANY repost passing `condition` will cause this criteria to be true\n* `all` -- ALL reposts must pass `condition` for this criteria to be true"
}
},
"required": [
"condition",
"testOn"
],
"type": "object"
},
"OccurrenceTests": {
"properties": {
"count": {
"properties": {
"condition": {
"enum": [
"AND",
"OR"
],
"type": "string"
},
"test": {
"description": "An array of strings containing a comparison operator and the number of repost occurrences to compare against\n\nExamples:\n\n* `\">= 7\"` -- TRUE if 7 or more reposts were found\n* `\"< 1\"` -- TRUE if less than 1 repost was found",
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"test"
],
"type": "object"
},
"time": {
"description": "Test the time the reposts occurred at",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"test": {
"description": "An array of time-based conditions to test against found reposts (test when a repost was made)",
"items": {
"$ref": "#/definitions/OccurredAt"
},
"type": "array"
}
},
"required": [
"test"
],
"type": "object"
}
},
"type": "object"
},
"PollingOptions": {
"description": "A configuration for where, how, and when to poll Reddit for Activities to process",
"examples": [
@@ -2718,6 +2794,224 @@
],
"type": "object"
},
"RepostCriteria": {
"description": "A set of criteria used to find reposts\n\nContains options and conditions used to define how candidate reposts are retrieved and if they are a match.",
"properties": {
"caseSensitive": {
"default": false,
"description": "Should text matching be case sensitive?\n\nDefaults to false",
"type": "boolean"
},
"matchScore": {
"default": 85,
"description": "The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match\n\nNote: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere\n\nDefaults to `85` (85%)",
"type": "number"
},
"maxExternalItems": {
"default": 50,
"description": "The maximum number of external items (youtube comments) to check (and cache for comment checks)",
"type": "number"
},
"maxRedditItems": {
"default": 50,
"description": "The maximum number of comments/submissions to check\n\nIn both cases this list is gathered from sorting all submissions or all comments from all submissions by number of votes and taking the \"top\" maximum specified\n\nFor comment checks this is the number of comments cached",
"type": "number"
},
"minWordCount": {
"default": 2,
"description": "The minimum number of words in the activity being checked for which this rule will run on\n\nIf the word count is below the minimum the rule fails\n\nDefaults to 2",
"type": "number"
},
"occurredAt": {
"description": "Test the time the reposts occurred at",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"description": "An array of time-based conditions to test against found reposts (test when a repost was made)",
"items": {
"$ref": "#/definitions/OccurredAt"
},
"type": "array"
}
},
"required": [
"criteria"
],
"type": "object"
},
"occurrences": {
"description": "A set of comparisons to test against the number of reposts found\n\nIf not specified the default is \"AND [occurrences] > 0\" IE any repost found makes this test pass",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"items": {
"$ref": "#/definitions/OccurrenceTests"
},
"type": "array"
}
},
"type": "object"
},
"searchOn": {
"description": "Define how to find candidate reposts\n\n* **title** -- search reddit for submissions with the same title\n* **url** -- search reddit for submissions with the same url\n* **external** -- WHEN ACTIVITY IS A COMMENT - tries to get comments from external source (youtube, twitter, etc...)",
"items": {
"anyOf": [
{
"$ref": "#/definitions/SearchFacetJSONConfig"
},
{
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
}
]
},
"type": "array"
},
"transformations": {
"description": "A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.\n\n* If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text\n* If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"transformationsActivity": {
"description": "Specify a separate set of transformations for the activity text (submission title or comment)\n\nTo perform no transformations when `transformations` is defined set this to an empty array (`[]`)",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"window": {
"anyOf": [
{
"$ref": "#/definitions/ActivityWindowCriteria"
},
{
"$ref": "#/definitions/DurationObject"
},
{
"type": [
"string",
"number"
]
}
],
"description": "A value to define the range of Activities to retrieve.\n\nAcceptable values:\n\n**`ActivityWindowCriteria` object**\n\nAllows specify multiple range properties and more specific behavior\n\n**A `number` of Activities to retrieve**\n\n* EX `100` => 100 Activities\n\n*****\n\nAny of the below values that specify the amount of time to subtract from `NOW` to create a time range IE `NOW <---> [duration] ago`\n\nAcceptable values:\n\n**A `string` consisting of a value and a [Day.js](https://day.js.org/docs/en/durations/creating#list-of-all-available-units) time UNIT**\n\n* EX `9 days` => Range is `NOW <---> 9 days ago`\n\n**A [Day.js](https://day.js.org/docs/en/durations/creating) `object`**\n\n* EX `{\"days\": 90, \"minutes\": 15}` => Range is `NOW <---> 90 days and 15 minutes ago`\n\n**An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) `string`**\n\n* EX `PT15M` => 15 minutes => Range is `NOW <----> 15 minutes ago`",
"examples": [
"90 days"
]
}
},
"type": "object"
},
"RepostRuleJSONConfig": {
"description": "Search for reposts of a Submission or Comment\n\n* For submissions the title or URL can be searched and matched against\n* For comments, candidate comments are gathered from similar reddit submissions and/or external sources (youtube, twitter, etc..) and then matched against",
"properties": {
"authorIs": {
"$ref": "#/definitions/AuthorOptions",
"description": "If present then these Author criteria are checked before running the rule. If criteria fails then the rule is skipped.",
"examples": [
{
"include": [
{
"flairText": [
"Contributor",
"Veteran"
]
},
{
"isMod": true
}
]
}
]
},
"condition": {
"default": "OR",
"description": "* If `OR` then any set of Criteria that pass will trigger the Rule\n* If `AND` then all Criteria sets must pass to trigger the Rule",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"description": "A list of Repost Criteria used to find and match candidate reposts",
"items": {
"$ref": "#/definitions/RepostCriteria"
},
"minItems": 1,
"type": "array"
},
"itemIs": {
"anyOf": [
{
"items": {
"$ref": "#/definitions/SubmissionState"
},
"type": "array"
},
{
"items": {
"$ref": "#/definitions/CommentState"
},
"type": "array"
}
],
"description": "A list of criteria to test the state of the `Activity` against before running the Rule.\n\nIf any set of criteria passes the Rule will be run. If the criteria fails then the Rule is skipped."
},
"kind": {
"description": "The kind of rule to run",
"enum": [
"repost"
],
"examples": [
"repost"
],
"type": "string"
},
"name": {
"description": "An optional, but highly recommended, friendly name for this rule. If not present will default to `kind`.\n\nCan only contain letters, numbers, underscore, spaces, and dashes\n\nname is used to reference Rule result data during Action content templating. See CommentAction or ReportAction for more details.",
"examples": [
"myNewRule"
],
"pattern": "^[a-zA-Z]([\\w -]*[\\w])?$",
"type": "string"
}
},
"required": [
"kind"
],
"type": "object"
},
"RuleSetJson": {
"description": "A RuleSet is a \"nested\" set of `Rule` objects that can be used to create more complex AND/OR behavior. Think of the outcome of a `RuleSet` as the result of all of its run `Rule` objects (based on `condition`)",
"properties": {
@@ -2755,6 +3049,9 @@
{
"$ref": "#/definitions/RegexRuleJSONConfig"
},
{
"$ref": "#/definitions/RepostRuleJSONConfig"
},
{
"type": "string"
}
@@ -2769,6 +3066,111 @@
],
"type": "object"
},
"SearchAndReplaceRegExp": {
"properties": {
"replace": {
"description": "The replacement string/value to use when search is found\n\nThis can be a literal string like `'replace with this`, an empty string to remove the search value (`''`), or a special regex value\n\nSee replacement here for more information: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace",
"type": "string"
},
"search": {
"description": "The search value to test for\n\nCan be a normal string (converted to a case-sensitive literal) or a valid regular expression\n\nEX `[\"find this string\", \"/some string*\\/ig\"]`",
"examples": [
"find this string",
"/some string*/ig"
],
"type": "string"
}
},
"required": [
"replace",
"search"
],
"type": "object"
},
"SearchFacetJSONConfig": {
"properties": {
"caseSensitive": {
"default": false,
"description": "Should text matching be case sensitive?\n\nDefaults to false",
"type": "boolean"
},
"kind": {
"anyOf": [
{
"items": {
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
},
"type": "array"
},
{
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
}
]
},
"matchScore": {
"default": 85,
"description": "The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match\n\nNote: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere\n\nDefaults to `85` (85%)",
"type": "number"
},
"minWordCount": {
"default": 2,
"description": "The minimum number of words in the activity being checked for which this rule will run on\n\nIf the word count is below the minimum the rule fails\n\nDefaults to 2",
"type": "number"
},
"transformations": {
"description": "A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.\n\n* If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text\n* If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"transformationsActivity": {
"description": "Specify a separate set of transformations for the activity text (submission title or comment)\n\nTo perform no transformations when `transformations` is defined set this to an empty array (`[]`)",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"window": {
"anyOf": [
{
"$ref": "#/definitions/ActivityWindowCriteria"
},
{
"$ref": "#/definitions/DurationObject"
},
{
"type": [
"string",
"number"
]
}
],
"description": "A value to define the range of Activities to retrieve.\n\nAcceptable values:\n\n**`ActivityWindowCriteria` object**\n\nAllows specify multiple range properties and more specific behavior\n\n**A `number` of Activities to retrieve**\n\n* EX `100` => 100 Activities\n\n*****\n\nAny of the below values that specify the amount of time to subtract from `NOW` to create a time range IE `NOW <---> [duration] ago`\n\nAcceptable values:\n\n**A `string` consisting of a value and a [Day.js](https://day.js.org/docs/en/durations/creating#list-of-all-available-units) time UNIT**\n\n* EX `9 days` => Range is `NOW <---> 9 days ago`\n\n**A [Day.js](https://day.js.org/docs/en/durations/creating) `object`**\n\n* EX `{\"days\": 90, \"minutes\": 15}` => Range is `NOW <---> 90 days and 15 minutes ago`\n\n**An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) `string`**\n\n* EX `PT15M` => 15 minutes => Range is `NOW <----> 15 minutes ago`",
"examples": [
"90 days"
]
}
},
"required": [
"kind"
],
"type": "object"
},
"SubmissionCheckJson": {
"properties": {
"actions": {
@@ -2928,6 +3330,9 @@
{
"$ref": "#/definitions/RegexRuleJSONConfig"
},
{
"$ref": "#/definitions/RepostRuleJSONConfig"
},
{
"$ref": "#/definitions/RuleSetJson"
},
@@ -3028,6 +3433,9 @@
}
],
"properties": {
"isUserProfile": {
"type": "boolean"
},
"name": {
"anyOf": [
{
@@ -3058,6 +3466,33 @@
},
"type": "object"
},
"ThirdPartyCredentialsJsonConfig": {
"additionalProperties": {
},
"properties": {
"youtube": {
"properties": {
"apiKey": {
"type": "string"
}
},
"required": [
"apiKey"
],
"type": "object"
}
},
"type": "object"
},
"TimeBasedSelector": {
"enum": [
"all",
"any",
"newest",
"oldest"
],
"type": "string"
},
"UserNoteActionJson": {
"description": "Add a Toolbox User Note to the Author of this Activity",
"properties": {
@@ -3241,6 +3676,9 @@
"minItems": 1,
"type": "array"
},
"credentials": {
"$ref": "#/definitions/ThirdPartyCredentialsJsonConfig"
},
"dryRun": {
"default": "undefined",
"description": "Use this option to override the `dryRun` setting for all `Checks`",

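To make the `RepostRuleJSONConfig` and `RepostCriteria` definitions above concrete, here is a hypothetical rule entry (property names are taken from the schema; the values and the rule name are illustrative only, not from the repo):

```json
{
  "kind": "repost",
  "name": "repostWatch",
  "criteria": [
    {
      "searchOn": ["title", "url"],
      "matchScore": 85,
      "window": "90 days",
      "occurrences": {
        "condition": "AND",
        "criteria": [
          {
            "count": {
              "test": [">= 3"]
            }
          }
        ]
      }
    }
  ]
}
```

This rule would trigger if at least 3 reposts of the activity's title or URL are found within the last 90 days, using the default 85% text-match score.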
View File

@@ -19,6 +19,28 @@
],
"type": "object"
},
"BotCredentialsJsonConfig": {
"properties": {
"reddit": {
"$ref": "#/definitions/RedditCredentials"
},
"youtube": {
"properties": {
"apiKey": {
"type": "string"
}
},
"required": [
"apiKey"
],
"type": "object"
}
},
"required": [
"reddit"
],
"type": "object"
},
"BotInstanceJsonConfig": {
"description": "The configuration for an **individual reddit account** ContextMod will run as a bot.\n\nMultiple bot configs may be specified (one per reddit account).\n\n**NOTE:** If `bots` is not specified in a `FILE` then a default `bot` is generated using `ENV/ARG` values IE `CLIENT_ID`, etc...but if `bots` IS specified the default is not generated.",
"properties": {
@@ -27,15 +49,12 @@
"description": "Settings to configure the default caching behavior for this bot\n\nEvery setting not specified will default to what is specified by the global operator caching config"
},
"credentials": {
"$ref": "#/definitions/RedditCredentials",
"description": "Credentials required for the bot to interact with Reddit's API\n\nThese credentials will provided to both the API and Web interface unless otherwise specified with the `web.credentials` property\n\nRefer to the [required credentials table](https://github.com/FoxxMD/context-mod/blob/master/docs/operatorConfiguration.md#minimum-required-configuration) to see what is necessary to run the bot.",
"examples": [
"anyOf": [
{
"accessToken": "p75_1c467b2",
"clientId": "f4b4df1_9oiu",
"clientSecret": "34v5q1c564_yt7",
"redirectUri": "http://localhost:8085/callback",
"refreshToken": "34_f1w1v4"
"$ref": "#/definitions/RedditCredentials"
},
{
"$ref": "#/definitions/BotCredentialsJsonConfig"
}
]
},
@@ -510,6 +529,24 @@
},
"type": "object"
},
"ThirdPartyCredentialsJsonConfig": {
"additionalProperties": {
},
"properties": {
"youtube": {
"properties": {
"apiKey": {
"type": "string"
}
},
"required": [
"apiKey"
],
"type": "object"
}
},
"type": "object"
},
"WebCredentials": {
"description": "Separate credentials for the web interface can be provided when also running the api.\n\nAll properties not specified will default to values given in ENV/ARG credential properties\n\nRefer to the [required credentials table](https://github.com/FoxxMD/context-mod/blob/master/docs/operatorConfiguration.md#minimum-required-configuration) to see what is necessary for the web interface.",
"examples": [
@@ -579,6 +616,9 @@
"$ref": "#/definitions/OperatorCacheConfig",
"description": "Settings to configure the default caching behavior globally\n\nThese settings will be used by each bot, and subreddit, that does not specify their own"
},
"credentials": {
"$ref": "#/definitions/ThirdPartyCredentialsJsonConfig"
},
"logging": {
"description": "Settings to configure global logging defaults",
"properties": {

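A sketch of how the new credential shapes above compose in an operator config: a bot's `credentials` can now be a `BotCredentialsJsonConfig` holding both reddit and third-party (youtube) credentials. All values below are placeholders, following the reddit credential field names shown in the schema's removed example:

```json
{
  "bots": [
    {
      "credentials": {
        "reddit": {
          "clientId": "f4b4df1_9oiu",
          "clientSecret": "34v5q1c564_yt7",
          "redirectUri": "http://localhost:8085/callback",
          "refreshToken": "34_f1w1v4",
          "accessToken": "p75_1c467b2"
        },
        "youtube": {
          "apiKey": "placeholderYoutubeApiKey"
        }
      }
    }
  ]
}
```

The youtube `apiKey` is what the repost rule's `external` search facet needs to fetch youtube comments.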
View File

@@ -19,6 +19,9 @@
{
"$ref": "#/definitions/RegexRuleJSONConfig"
},
{
"$ref": "#/definitions/RepostRuleJSONConfig"
},
{
"type": "string"
}
@@ -617,8 +620,6 @@
}
},
"required": [
"exclude",
"include",
"kind"
],
"type": "object"
@@ -643,6 +644,11 @@
"deleted": {
"type": "boolean"
},
"depth": {
"description": "The (nested) level of a comment.\n\n* 0 mean the comment is at top-level (replying to submission)\n* non-zero, Nth value means the comment has N parent comments",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"distinguished": {
"type": "boolean"
},
@@ -989,6 +995,76 @@
},
"type": "object"
},
"OccurredAt": {
"properties": {
"condition": {
"description": "A duration and how to compare it against a value\n\nThe syntax is `(< OR > OR <= OR >=) <number> <unit>` EX `> 100 days`, `<= 2 months`\n\n* EX `> 100 days` => Passes if the date being compared is before 100 days ago\n* EX `<= 2 months` => Passes if the date being compared is after or equal to 2 months\n\nUnit must be one of [DayJS Duration units](https://day.js.org/docs/en/durations/creating)\n\n[See] https://regexr.com/609n8 for example",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"testOn": {
"$ref": "#/definitions/TimeBasedSelector",
"description": "Which repost to test on\n\n* `any` -- ANY repost passing `condition` will cause this criteria to be true\n* `all` -- ALL reposts must pass `condition` for this criteria to be true"
}
},
"required": [
"condition",
"testOn"
],
"type": "object"
},
"OccurrenceTests": {
"properties": {
"count": {
"properties": {
"condition": {
"enum": [
"AND",
"OR"
],
"type": "string"
},
"test": {
"description": "An array of strings containing a comparison operator and the number of repost occurrences to compare against\n\nExamples:\n\n* `\">= 7\"` -- TRUE if 7 or more reposts were found\n* `\"< 1\"` -- TRUE if less than 0 reposts were found",
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"test"
],
"type": "object"
},
"time": {
"description": "Test the time the reposts occurred at",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"test": {
"description": "An array of time-based conditions to test against found reposts (test when a repost was made)",
"items": {
"$ref": "#/definitions/OccurredAt"
},
"type": "array"
}
},
"required": [
"test"
],
"type": "object"
}
},
"type": "object"
},
"RecentActivityRuleJSONConfig": {
"description": "Checks a user's history for any Activity (Submission/Comment) in the subreddits specified in thresholds\n\nAvailable data for [Action templating](https://github.com/FoxxMD/context-mod#action-templating):\n\n```\nsummary => comma-deliminated list of subreddits that hit the threshold and their count EX subredditA(1), subredditB(4),...\nsubCount => Total number of subreddits that hit the threshold\ntotalCount => Total number of all activity occurrences in subreddits\n```",
"properties": {
@@ -1488,6 +1564,329 @@
],
"type": "object"
},
"RepostCriteria": {
"description": "A set of criteria used to find reposts\n\nContains options and conditions used to define how candidate reposts are retrieved and if they are a match.",
"properties": {
"caseSensitive": {
"default": false,
"description": "Should text matching be case sensitive?\n\nDefaults to false",
"type": "boolean"
},
"matchScore": {
"default": 85,
"description": "The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match\n\nNote: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere\n\nDefaults to `85` (85%)",
"type": "number"
},
"maxExternalItems": {
"default": 50,
"description": "The maximum number of external items (youtube comments) to check (and cache for comment checks)",
"type": "number"
},
"maxRedditItems": {
"default": 50,
"description": "The maximum number of comments/submissions to check\n\nIn both cases this list is gathered from sorting all submissions or all comments from all submission by number of votes and taking the \"top\" maximum specified\n\nFor comment checks this is the number of comments cached",
"type": "number"
},
"minWordCount": {
"default": 2,
"description": "The minimum number of words in the activity being checked for which this rule will run on\n\nIf the word count is below the minimum the rule fails\n\nDefaults to 2",
"type": "number"
},
"occurredAt": {
"description": "Test the time the reposts occurred at",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"description": "An array of time-based conditions to test against found reposts (test when a repost was made)",
"items": {
"$ref": "#/definitions/OccurredAt"
},
"type": "array"
}
},
"required": [
"criteria"
],
"type": "object"
},
"occurrences": {
"description": "A set of comparisons to test against the number of reposts found\n\nIf not specified the default is \"AND [occurrences] > 0\" IE any reposts makes this test pass",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"items": {
"$ref": "#/definitions/OccurrenceTests"
},
"type": "array"
}
},
"type": "object"
},
"searchOn": {
"description": "Define how to find candidate reposts\n\n* **title** -- search reddit for submissions with the same title\n* **url** -- search reddit for submissions with the same url\n* **external** -- WHEN ACTIVITY IS A COMMENT - tries to get comments from external source (youtube, twitter, etc...)",
"items": {
"anyOf": [
{
"$ref": "#/definitions/SearchFacetJSONConfig"
},
{
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
}
]
},
"type": "array"
},
"transformations": {
"description": "A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.\n\n* If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text\n* If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"transformationsActivity": {
"description": "Specify a separate set of transformations for the activity text (submission title or comment)\n\nTo perform no transformations when `transformations` is defined set this to an empty array (`[]`)",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"window": {
"anyOf": [
{
"$ref": "#/definitions/ActivityWindowCriteria"
},
{
"$ref": "#/definitions/DurationObject"
},
{
"type": [
"string",
"number"
]
}
],
"description": "A value to define the range of Activities to retrieve.\n\nAcceptable values:\n\n**`ActivityWindowCriteria` object**\n\nAllows specify multiple range properties and more specific behavior\n\n**A `number` of Activities to retrieve**\n\n* EX `100` => 100 Activities\n\n*****\n\nAny of the below values that specify the amount of time to subtract from `NOW` to create a time range IE `NOW <---> [duration] ago`\n\nAcceptable values:\n\n**A `string` consisting of a value and a [Day.js](https://day.js.org/docs/en/durations/creating#list-of-all-available-units) time UNIT**\n\n* EX `9 days` => Range is `NOW <---> 9 days ago`\n\n**A [Day.js](https://day.js.org/docs/en/durations/creating) `object`**\n\n* EX `{\"days\": 90, \"minutes\": 15}` => Range is `NOW <---> 90 days and 15 minutes ago`\n\n**An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) `string`**\n\n* EX `PT15M` => 15 minutes => Range is `NOW <----> 15 minutes ago`",
"examples": [
"90 days"
]
}
},
"type": "object"
},
"RepostRuleJSONConfig": {
"description": "Search for reposts of a Submission or Comment\n\n* For submissions the title or URL can searched and matched against\n* For comments, candidate comments are gathered from similar reddit submissions and/or external sources (youtube, twitter, etc..) and then matched against",
"properties": {
"authorIs": {
"$ref": "#/definitions/AuthorOptions",
"description": "If present then these Author criteria are checked before running the rule. If criteria fails then the rule is skipped.",
"examples": [
{
"include": [
{
"flairText": [
"Contributor",
"Veteran"
]
},
{
"isMod": true
}
]
}
]
},
"condition": {
"default": "OR",
"description": "* If `OR` then any set of Criteria that pass will trigger the Rule\n* If `AND` then all Criteria sets must pass to trigger the Rule",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"description": "A list of Regular Expressions and conditions under which tested Activity(ies) are matched",
"examples": [
{
"matchThreshold": "> 3",
"regex": "/reddit/"
}
],
"items": {
"$ref": "#/definitions/RepostCriteria"
},
"minItems": 1,
"type": "array"
},
"itemIs": {
"anyOf": [
{
"items": {
"$ref": "#/definitions/SubmissionState"
},
"type": "array"
},
{
"items": {
"$ref": "#/definitions/CommentState"
},
"type": "array"
}
],
"description": "A list of criteria to test the state of the `Activity` against before running the Rule.\n\nIf any set of criteria passes the Rule will be run. If the criteria fails then the Rule is skipped."
},
"kind": {
"description": "The kind of rule to run",
"enum": [
"repost"
],
"examples": [
"repost"
],
"type": "string"
},
"name": {
"description": "An optional, but highly recommended, friendly name for this rule. If not present will default to `kind`.\n\nCan only contain letters, numbers, underscore, spaces, and dashes\n\nname is used to reference Rule result data during Action content templating. See CommentAction or ReportAction for more details.",
"examples": [
"myNewRule"
],
"pattern": "^[a-zA-Z]([\\w -]*[\\w])?$",
"type": "string"
}
},
"required": [
"kind"
],
"type": "object"
},
"SearchAndReplaceRegExp": {
"properties": {
"replace": {
"description": "The replacement string/value to use when search is found\n\nThis can be a literal string like `'replace with this`, an empty string to remove the search value (`''`), or a special regex value\n\nSee replacement here for more information: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace",
"type": "string"
},
"search": {
"description": "The search value to test for\n\nCan be a normal string (converted to a case-sensitive literal) or a valid regular expression\n\nEX `[\"find this string\", \"/some string*\\/ig\"]`",
"examples": [
"find this string",
"/some string*/ig"
],
"type": "string"
}
},
"required": [
"replace",
"search"
],
"type": "object"
},
"SearchFacetJSONConfig": {
"properties": {
"caseSensitive": {
"default": false,
"description": "Should text matching be case sensitive?\n\nDefaults to false",
"type": "boolean"
},
"kind": {
"anyOf": [
{
"items": {
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
},
"type": "array"
},
{
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
}
]
},
"matchScore": {
"default": 85,
"description": "The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match\n\nNote: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere\n\nDefaults to `85` (85%)",
"type": "number"
},
"minWordCount": {
"default": 2,
"description": "The minimum number of words in the activity being checked for which this rule will run on\n\nIf the word count is below the minimum the rule fails\n\nDefaults to 2",
"type": "number"
},
"transformations": {
"description": "A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.\n\n* If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text\n* If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"transformationsActivity": {
"description": "Specify a separate set of transformations for the activity text (submission title or comment)\n\nTo perform no transformations when `transformations` is defined set this to an empty array (`[]`)",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"window": {
"anyOf": [
{
"$ref": "#/definitions/ActivityWindowCriteria"
},
{
"$ref": "#/definitions/DurationObject"
},
{
"type": [
"string",
"number"
]
}
],
"description": "A value to define the range of Activities to retrieve.\n\nAcceptable values:\n\n**`ActivityWindowCriteria` object**\n\nAllows specify multiple range properties and more specific behavior\n\n**A `number` of Activities to retrieve**\n\n* EX `100` => 100 Activities\n\n*****\n\nAny of the below values that specify the amount of time to subtract from `NOW` to create a time range IE `NOW <---> [duration] ago`\n\nAcceptable values:\n\n**A `string` consisting of a value and a [Day.js](https://day.js.org/docs/en/durations/creating#list-of-all-available-units) time UNIT**\n\n* EX `9 days` => Range is `NOW <---> 9 days ago`\n\n**A [Day.js](https://day.js.org/docs/en/durations/creating) `object`**\n\n* EX `{\"days\": 90, \"minutes\": 15}` => Range is `NOW <---> 90 days and 15 minutes ago`\n\n**An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) `string`**\n\n* EX `PT15M` => 15 minutes => Range is `NOW <----> 15 minutes ago`",
"examples": [
"90 days"
]
}
},
"required": [
"kind"
],
"type": "object"
},
"SubmissionState": {
"description": "Different attributes a `Submission` can be in. Only include a property if you want to check it.",
"examples": [
@@ -1570,6 +1969,9 @@
}
],
"properties": {
"isUserProfile": {
"type": "boolean"
},
"name": {
"anyOf": [
{
@@ -1600,6 +2002,15 @@
},
"type": "object"
},
"TimeBasedSelector": {
"enum": [
"all",
"any",
"newest",
"oldest"
],
"type": "string"
},
"UserNoteCriteria": {
"properties": {
"count": {

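As a sketch of the `OccurredAt`/`TimeBasedSelector` definitions above, a hypothetical `occurredAt` block inside a `RepostCriteria` (values illustrative) that passes only if every found repost is older than 6 months:

```json
{
  "occurredAt": {
    "condition": "AND",
    "criteria": [
      {
        "condition": "> 6 months",
        "testOn": "all"
      }
    ]
  }
}
```

`"> 6 months"` follows the schema's comparison pattern (`(< > <= >=) <number> <unit>`), and `"testOn": "all"` requires every repost to pass that comparison.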
View File

@@ -594,8 +594,6 @@
}
},
"required": [
"exclude",
"include",
"kind"
],
"type": "object"
@@ -620,6 +618,11 @@
"deleted": {
"type": "boolean"
},
"depth": {
"description": "The (nested) level of a comment.\n\n* 0 mean the comment is at top-level (replying to submission)\n* non-zero, Nth value means the comment has N parent comments",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"distinguished": {
"type": "boolean"
},
@@ -966,6 +969,76 @@
},
"type": "object"
},
"OccurredAt": {
"properties": {
"condition": {
"description": "A duration and how to compare it against a value\n\nThe syntax is `(< OR > OR <= OR >=) <number> <unit>` EX `> 100 days`, `<= 2 months`\n\n* EX `> 100 days` => Passes if the date being compared is before 100 days ago\n* EX `<= 2 months` => Passes if the date being compared is after or equal to 2 months\n\nUnit must be one of [DayJS Duration units](https://day.js.org/docs/en/durations/creating)\n\n[See] https://regexr.com/609n8 for example",
"pattern": "^\\s*(>|>=|<|<=)\\s*(\\d+)\\s*(days|weeks|months|years|hours|minutes|seconds|milliseconds)\\s*$",
"type": "string"
},
"testOn": {
"$ref": "#/definitions/TimeBasedSelector",
"description": "Which repost to test on\n\n* `any` -- ANY repost passing `condition` will cause this criteria to be true\n* `all` -- ALL reposts must pass `condition` for this criteria to be true"
}
},
"required": [
"condition",
"testOn"
],
"type": "object"
},
"OccurrenceTests": {
"properties": {
"count": {
"properties": {
"condition": {
"enum": [
"AND",
"OR"
],
"type": "string"
},
"test": {
"description": "An array of strings containing a comparison operator and the number of repost occurrences to compare against\n\nExamples:\n\n* `\">= 7\"` -- TRUE if 7 or more reposts were found\n* `\"< 1\"` -- TRUE if less than 0 reposts were found",
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"test"
],
"type": "object"
},
"time": {
"description": "Test the time the reposts occurred at",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"test": {
"description": "An array of time-based conditions to test against found reposts (test when a repost was made)",
"items": {
"$ref": "#/definitions/OccurredAt"
},
"type": "array"
}
},
"required": [
"test"
],
"type": "object"
}
},
"type": "object"
},
"RecentActivityRuleJSONConfig": {
"description": "Checks a user's history for any Activity (Submission/Comment) in the subreddits specified in thresholds\n\nAvailable data for [Action templating](https://github.com/FoxxMD/context-mod#action-templating):\n\n```\nsummary => comma-deliminated list of subreddits that hit the threshold and their count EX subredditA(1), subredditB(4),...\nsubCount => Total number of subreddits that hit the threshold\ntotalCount => Total number of all activity occurrences in subreddits\n```",
"properties": {
@@ -1465,6 +1538,329 @@
],
"type": "object"
},
"RepostCriteria": {
"description": "A set of criteria used to find reposts\n\nContains options and conditions used to define how candidate reposts are retrieved and if they are a match.",
"properties": {
"caseSensitive": {
"default": false,
"description": "Should text matching be case sensitive?\n\nDefaults to false",
"type": "boolean"
},
"matchScore": {
"default": 85,
"description": "The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match\n\nNote: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere\n\nDefaults to `85` (85%)",
"type": "number"
},
"maxExternalItems": {
"default": 50,
"description": "The maximum number of external items (youtube comments) to check (and cache for comment checks)",
"type": "number"
},
"maxRedditItems": {
"default": 50,
"description": "The maximum number of comments/submissions to check\n\nIn both cases this list is gathered from sorting all submissions or all comments from all submission by number of votes and taking the \"top\" maximum specified\n\nFor comment checks this is the number of comments cached",
"type": "number"
},
"minWordCount": {
"default": 2,
"description": "The minimum number of words in the activity being checked for which this rule will run on\n\nIf the word count is below the minimum the rule fails\n\nDefaults to 2",
"type": "number"
},
"occurredAt": {
"description": "Test the time the reposts occurred at",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"description": "An array of time-based conditions to test against found reposts (test when a repost was made)",
"items": {
"$ref": "#/definitions/OccurredAt"
},
"type": "array"
}
},
"required": [
"criteria"
],
"type": "object"
},
"occurrences": {
"description": "A set of comparisons to test against the number of reposts found\n\nIf not specified the default is \"AND [occurrences] > 0\" IE any reposts makes this test pass",
"properties": {
"condition": {
"default": "AND",
"description": "How to test all the specified comparisons\n\n* AND -- All criteria must be true\n* OR -- Any criteria must be true\n\nDefaults to AND",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"items": {
"$ref": "#/definitions/OccurrenceTests"
},
"type": "array"
}
},
"type": "object"
},
"searchOn": {
"description": "Define how to find candidate reposts\n\n* **title** -- search reddit for submissions with the same title\n* **url** -- search reddit for submissions with the same url\n* **external** -- WHEN ACTIVITY IS A COMMENT - tries to get comments from external source (youtube, twitter, etc...)",
"items": {
"anyOf": [
{
"$ref": "#/definitions/SearchFacetJSONConfig"
},
{
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
}
]
},
"type": "array"
},
"transformations": {
"description": "A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.\n\n* If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text\n* If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"transformationsActivity": {
"description": "Specify a separate set of transformations for the activity text (submission title or comment)\n\nTo perform no transformations when `transformations` is defined set this to an empty array (`[]`)",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"window": {
"anyOf": [
{
"$ref": "#/definitions/ActivityWindowCriteria"
},
{
"$ref": "#/definitions/DurationObject"
},
{
"type": [
"string",
"number"
]
}
],
"description": "A value to define the range of Activities to retrieve.\n\nAcceptable values:\n\n**`ActivityWindowCriteria` object**\n\nAllows specify multiple range properties and more specific behavior\n\n**A `number` of Activities to retrieve**\n\n* EX `100` => 100 Activities\n\n*****\n\nAny of the below values that specify the amount of time to subtract from `NOW` to create a time range IE `NOW <---> [duration] ago`\n\nAcceptable values:\n\n**A `string` consisting of a value and a [Day.js](https://day.js.org/docs/en/durations/creating#list-of-all-available-units) time UNIT**\n\n* EX `9 days` => Range is `NOW <---> 9 days ago`\n\n**A [Day.js](https://day.js.org/docs/en/durations/creating) `object`**\n\n* EX `{\"days\": 90, \"minutes\": 15}` => Range is `NOW <---> 90 days and 15 minutes ago`\n\n**An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) `string`**\n\n* EX `PT15M` => 15 minutes => Range is `NOW <----> 15 minutes ago`",
"examples": [
"90 days"
]
}
},
"type": "object"
},
"RepostRuleJSONConfig": {
"description": "Search for reposts of a Submission or Comment\n\n* For submissions the title or URL can searched and matched against\n* For comments, candidate comments are gathered from similar reddit submissions and/or external sources (youtube, twitter, etc..) and then matched against",
"properties": {
"authorIs": {
"$ref": "#/definitions/AuthorOptions",
"description": "If present then these Author criteria are checked before running the rule. If criteria fails then the rule is skipped.",
"examples": [
{
"include": [
{
"flairText": [
"Contributor",
"Veteran"
]
},
{
"isMod": true
}
]
}
]
},
"condition": {
"default": "OR",
"description": "* If `OR` then any set of Criteria that pass will trigger the Rule\n* If `AND` then all Criteria sets must pass to trigger the Rule",
"enum": [
"AND",
"OR"
],
"type": "string"
},
"criteria": {
"description": "A list of Regular Expressions and conditions under which tested Activity(ies) are matched",
"examples": [
{
"matchThreshold": "> 3",
"regex": "/reddit/"
}
],
"items": {
"$ref": "#/definitions/RepostCriteria"
},
"minItems": 1,
"type": "array"
},
"itemIs": {
"anyOf": [
{
"items": {
"$ref": "#/definitions/SubmissionState"
},
"type": "array"
},
{
"items": {
"$ref": "#/definitions/CommentState"
},
"type": "array"
}
],
"description": "A list of criteria to test the state of the `Activity` against before running the Rule.\n\nIf any set of criteria passes the Rule will be run. If the criteria fails then the Rule is skipped."
},
"kind": {
"description": "The kind of rule to run",
"enum": [
"repost"
],
"examples": [
"repost"
],
"type": "string"
},
"name": {
"description": "An optional, but highly recommended, friendly name for this rule. If not present will default to `kind`.\n\nCan only contain letters, numbers, underscore, spaces, and dashes\n\nname is used to reference Rule result data during Action content templating. See CommentAction or ReportAction for more details.",
"examples": [
"myNewRule"
],
"pattern": "^[a-zA-Z]([\\w -]*[\\w])?$",
"type": "string"
}
},
"required": [
"kind"
],
"type": "object"
},
"SearchAndReplaceRegExp": {
"properties": {
"replace": {
"description": "The replacement string/value to use when search is found\n\nThis can be a literal string like `'replace with this`, an empty string to remove the search value (`''`), or a special regex value\n\nSee replacement here for more information: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace",
"type": "string"
},
"search": {
"description": "The search value to test for\n\nCan be a normal string (converted to a case-sensitive literal) or a valid regular expression\n\nEX `[\"find this string\", \"/some string*\\/ig\"]`",
"examples": [
"find this string",
"/some string*/ig"
],
"type": "string"
}
},
"required": [
"replace",
"search"
],
"type": "object"
},
"SearchFacetJSONConfig": {
"properties": {
"caseSensitive": {
"default": false,
"description": "Should text matching be case sensitive?\n\nDefaults to false",
"type": "boolean"
},
"kind": {
"anyOf": [
{
"items": {
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
},
"type": "array"
},
{
"enum": [
"crossposts",
"duplicates",
"external",
"title",
"url"
],
"type": "string"
}
]
},
"matchScore": {
"default": 85,
"description": "The percentage, as a whole number, of a repost title/comment that must match the title/comment being checked in order to consider both a match\n\nNote: Setting to 0 will make every candidate considered a match -- useful if you want to match if the URL has been reposted anywhere\n\nDefaults to `85` (85%)",
"type": "number"
},
"minWordCount": {
"default": 2,
"description": "The minimum number of words in the activity being checked for which this rule will run on\n\nIf the word count is below the minimum the rule fails\n\nDefaults to 2",
"type": "number"
},
"transformations": {
"description": "A set of search-and-replace operations to perform on text values before performing a match. Transformations are performed in the order they are defined.\n\n* If `transformationsActivity` IS NOT defined then these transformations will be performed on BOTH the activity text (submission title or comment) AND the repost candidate text\n* If `transformationsActivity` IS defined then these transformations are only performed on repost candidate text",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"transformationsActivity": {
"description": "Specify a separate set of transformations for the activity text (submission title or comment)\n\nTo perform no transformations when `transformations` is defined set this to an empty array (`[]`)",
"items": {
"$ref": "#/definitions/SearchAndReplaceRegExp"
},
"type": "array"
},
"window": {
"anyOf": [
{
"$ref": "#/definitions/ActivityWindowCriteria"
},
{
"$ref": "#/definitions/DurationObject"
},
{
"type": [
"string",
"number"
]
}
],
"description": "A value to define the range of Activities to retrieve.\n\nAcceptable values:\n\n**`ActivityWindowCriteria` object**\n\nAllows specify multiple range properties and more specific behavior\n\n**A `number` of Activities to retrieve**\n\n* EX `100` => 100 Activities\n\n*****\n\nAny of the below values that specify the amount of time to subtract from `NOW` to create a time range IE `NOW <---> [duration] ago`\n\nAcceptable values:\n\n**A `string` consisting of a value and a [Day.js](https://day.js.org/docs/en/durations/creating#list-of-all-available-units) time UNIT**\n\n* EX `9 days` => Range is `NOW <---> 9 days ago`\n\n**A [Day.js](https://day.js.org/docs/en/durations/creating) `object`**\n\n* EX `{\"days\": 90, \"minutes\": 15}` => Range is `NOW <---> 90 days and 15 minutes ago`\n\n**An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) `string`**\n\n* EX `PT15M` => 15 minutes => Range is `NOW <----> 15 minutes ago`",
"examples": [
"90 days"
]
}
},
"required": [
"kind"
],
"type": "object"
},
"SubmissionState": {
"description": "Different attributes a `Submission` can be in. Only include a property if you want to check it.",
"examples": [
@@ -1547,6 +1943,9 @@
}
],
"properties": {
"isUserProfile": {
"type": "boolean"
},
"name": {
"anyOf": [
{
@@ -1577,6 +1976,15 @@
},
"type": "object"
},
"TimeBasedSelector": {
"enum": [
"all",
"any",
"newest",
"oldest"
],
"type": "string"
},
"UserNoteCriteria": {
"properties": {
"count": {
@@ -1651,6 +2059,9 @@
{
"$ref": "#/definitions/RegexRuleJSONConfig"
},
{
"$ref": "#/definitions/RepostRuleJSONConfig"
},
{
"type": "string"
}

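The `RepostRuleJSONConfig` definition above can be read as a concrete config. Below is a minimal sketch of a rule that would validate against it, using only the example values the schema itself provides (the full shape of `RepostCriteria` is not shown in this chunk, so the criteria entry is taken verbatim from the schema's example):

```json
{
  "name": "myRepostRule",
  "kind": "repost",
  "condition": "OR",
  "criteria": [
    {
      "matchThreshold": "> 3",
      "regex": "/reddit/"
    }
  ]
}
```

Since `kind` is the only required property, everything else here is optional and falls back to the defaults documented above (`condition` defaults to `OR`).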

@@ -41,6 +41,7 @@ import NotificationManager from "../Notification/NotificationManager";
import action from "../Web/Server/routes/authenticated/user/action";
import {createHistoricalDefaults, historicalDefaults} from "../Common/defaults";
import {ExtendedSnoowrap} from "../Utils/SnoowrapClients";
import {isRateLimitError} from "../Utils/Errors";
export interface RunningState {
state: RunState,
@@ -73,7 +74,7 @@ interface QueuedIdentifier {
state: 'queued' | 'processing'
}
export class Manager {
export class Manager extends EventEmitter {
subreddit: Subreddit;
client: ExtendedSnoowrap;
logger: Logger;
@@ -94,7 +95,6 @@ export class Manager {
sharedModqueue: boolean;
cacheManager: BotResourcesManager;
globalDryRun?: boolean;
emitter: EventEmitter = new EventEmitter();
queue: QueueObject<CheckTask>;
// firehose is used to ensure all activities from different polling streams are unique
// that is -- if the same activity is in both modqueue and unmoderated we don't want to process the activity twice or use stale data
@@ -182,6 +182,8 @@ export class Manager {
}
constructor(sub: Subreddit, client: ExtendedSnoowrap, logger: Logger, cacheManager: BotResourcesManager, opts: RuntimeManagerOptions = {botName: 'ContextMod', maxWorkers: 1}) {
super();
const {dryRun, sharedModqueue = false, wikiLocation = 'botconfig/contextbot', botName, maxWorkers} = opts;
this.displayLabel = opts.nickname || `${sub.display_name_prefixed}`;
const getLabels = this.getCurrentLabels;
@@ -339,6 +341,7 @@ export class Manager {
const {
polling = [{pollOn: 'unmoderated', limit: DEFAULT_POLLING_LIMIT, interval: DEFAULT_POLLING_INTERVAL}],
caching,
credentials,
dryRun,
footer,
nickname,
@@ -379,6 +382,7 @@ export class Manager {
logger: this.logger,
subreddit: this.subreddit,
caching,
credentials,
client: this.client,
};
this.resources = await this.cacheManager.set(this.subreddit.display_name, resourceConfig);
@@ -390,6 +394,10 @@ export class Manager {
const commentChecks: Array<CommentCheck> = [];
const subChecks: Array<SubmissionCheck> = [];
const structuredChecks = configBuilder.parseToStructured(validJson);
// TODO check that bot has permissions for subreddit for all specified actions
// can find permissions in this.subreddit.mod_permissions
for (const jCheck of structuredChecks) {
const checkConfig = {
...jCheck,
@@ -415,7 +423,7 @@ export class Manager {
this.logger.info(checkSummary);
}
this.validConfigLoaded = true;
} catch (err) {
} catch (err: any) {
this.validConfigLoaded = false;
throw err;
}
@@ -456,7 +464,7 @@ export class Manager {
this.lastWikiRevision = revisionDate;
sourceData = await wiki.content_md;
} catch (err) {
} catch (err: any) {
const msg = `Could not read wiki configuration. Please ensure the page https://reddit.com${this.subreddit.url}wiki/${this.wikiLocation} exists and is readable -- error: ${err.message}`;
this.logger.error(msg);
throw new ConfigParseError(msg);
@@ -484,7 +492,7 @@ export class Manager {
}
return true;
} catch (err) {
} catch (err: any) {
this.validConfigLoaded = false;
throw err;
}
@@ -502,50 +510,10 @@ export class Manager {
const [peek, _] = await itemContentPeek(item);
ePeek = peek;
this.logger.info(`<EVENT> ${peek}`);
} catch (err) {
} catch (err: any) {
this.logger.error(`Error occurred while generating item peek for ${checkType} Activity ${itemId}`, err);
}
const {
checkNames = [],
delayUntil,
dryRun,
refresh = false,
} = options || {};
let wasRefreshed = false;
if (delayUntil !== undefined) {
const created = dayjs.unix(item.created_utc);
const diff = dayjs().diff(created, 's');
if (diff < delayUntil) {
this.logger.verbose(`Delaying processing until Activity is ${delayUntil} seconds old (${delayUntil - diff}s)`);
await sleep(delayUntil - diff);
// @ts-ignore
item = await activity.refresh();
wasRefreshed = true;
}
}
// refresh signal from firehose if activity was ingested multiple times before processing or re-queued while processing
// want to make sure we have the most recent data
if(!wasRefreshed && refresh === true) {
this.logger.verbose('Refreshed data (probably due to signal from firehose)');
// @ts-ignore
item = await activity.refresh();
}
const startingApiLimit = this.client.ratelimitRemaining;
if (item instanceof Submission) {
if (await item.removed_by_category === 'deleted') {
this.logger.warn('Submission was deleted, cannot process.');
return;
}
} else if (item.author.name === '[deleted]') {
this.logger.warn('Comment was deleted, cannot process.');
return;
}
let checksRun = 0;
let actionsRun = 0;
let totalRulesRun = 0;
@@ -567,7 +535,48 @@ export class Manager {
let triggeredCheckName;
const checksRunNames = [];
const cachedCheckNames = [];
const startingApiLimit = this.client.ratelimitRemaining;
const {
checkNames = [],
delayUntil,
dryRun,
refresh = false,
} = options || {};
let wasRefreshed = false;
try {
if (delayUntil !== undefined) {
const created = dayjs.unix(item.created_utc);
const diff = dayjs().diff(created, 's');
if (diff < delayUntil) {
this.logger.verbose(`Delaying processing until Activity is ${delayUntil} seconds old (${delayUntil - diff}s)`);
await sleep(delayUntil - diff);
// @ts-ignore
item = await activity.refresh();
wasRefreshed = true;
}
}
// refresh signal from firehose if activity was ingested multiple times before processing or re-queued while processing
// want to make sure we have the most recent data
if(!wasRefreshed && refresh === true) {
this.logger.verbose('Refreshed data (probably due to signal from firehose)');
// @ts-ignore
item = await activity.refresh();
}
if (item instanceof Submission) {
if (await item.removed_by_category === 'deleted') {
this.logger.warn('Submission was deleted, cannot process.');
return;
}
} else if (item.author.name === '[deleted]') {
this.logger.warn('Comment was deleted, cannot process.');
return;
}
for (const check of checks) {
if (checkNames.length > 0 && !checkNames.map(x => x.toLowerCase()).some(x => x === check.name.toLowerCase())) {
this.logger.warn(`Check ${check.name} not in array of requested checks to run, skipping...`);
@@ -598,7 +607,7 @@ export class Manager {
this.logger.info('Check was triggered but cache result options specified NOT to run actions...counting as check NOT triggered');
triggered = false;
}
} catch (e) {
} catch (e: any) {
if (e.logged !== true) {
this.logger.warn(`Running rules for Check ${check.name} failed due to uncaught exception`, e);
}
@@ -628,10 +637,11 @@ export class Manager {
this.logger.info('No checks triggered');
}
} catch (err) {
} catch (err: any) {
if (!(err instanceof LoggedError) && err.logged !== true) {
this.logger.error('An unhandled error occurred while running checks', err);
}
this.emit('error', err);
} finally {
try {
actionedEvent.actionResults = runActions;
@@ -642,7 +652,7 @@ export class Manager {
this.logger.verbose(`Run Stats: Checks ${checksRun} | Rules => Total: ${totalRulesRun} Unique: ${allRuleResults.length} Cached: ${totalRulesRun - allRuleResults.length} Rolling Avg: ~${formatNumber(this.rulesUniqueRollingAvg)}/s | Actions ${actionsRun}`);
this.logger.verbose(`Reddit API Stats: Initial ${startingApiLimit} | Current ${this.client.ratelimitRemaining} | Used ~${startingApiLimit - this.client.ratelimitRemaining} | Events ~${formatNumber(this.eventsRollingAvg)}/s`);
this.currentLabels = [];
} catch (err) {
} catch (err: any) {
this.logger.error('Error occurred while cleaning up Activity check and generating stats', err);
} finally {
this.resources.updateHistoricalStats({
@@ -664,7 +674,7 @@ export class Manager {
// give current handle() time to stop
//await sleep(1000);
const retryHandler = createRetryHandler({maxRequestRetry: 5, maxOtherRetry: 1}, this.logger);
const retryHandler = createRetryHandler({maxRequestRetry: 3, maxOtherRetry: 1}, this.logger);
const subName = this.subreddit.display_name;
@@ -766,13 +776,19 @@ export class Manager {
// @ts-ignore
stream.on('error', async (err: any) => {
this.emit('error', err);
if(isRateLimitError(err)) {
this.logger.error('Encountered rate limit while polling! Bot is all out of requests :( Stopping subreddit queue and polling.');
await this.stop();
}
this.logger.error('Polling error occurred', err);
const shouldRetry = await retryHandler(err);
if (shouldRetry) {
stream.startInterval();
} else {
this.logger.warn('Pausing event polling due to too many errors');
await this.pauseEvents();
this.logger.warn('Stopping subreddit processing/polling due to too many errors');
await this.stop();
}
});
this.streams.push(stream);
@@ -856,10 +872,19 @@ export class Manager {
} else {
const pauseWaitStart = dayjs();
this.logger.info(`Activity processing queue is stopping...waiting for ${this.queue.running()} activities to finish processing`);
const fullStopTime = dayjs().add(5, 'seconds');
let gracefulStop = true;
while (this.queue.running() > 0) {
if(dayjs().isAfter(fullStopTime)) {
gracefulStop = false;
break;
}
await sleep(1500);
this.logger.verbose(`Activity processing queue is stopping...waiting for ${this.queue.running()} activities to finish processing`);
}
if(!gracefulStop) {
this.logger.warn('Waited longer than 5 seconds to stop activities. Something isn\'t right so forcing stop :/ ');
}
this.logger.info(`Activity processing queue stopped by ${causedBy} and ${this.queue.length()} queued activities cleared (waited ${dayjs().diff(pauseWaitStart, 's')} seconds while activity processing finished)`);
this.firehose.kill();
this.queue.kill();


@@ -5,8 +5,7 @@ import {PollConfiguration} from "snoostorm/out/util/Poll";
import {ClearProcessedOptions, DEFAULT_POLLING_INTERVAL} from "../Common/interfaces";
import dayjs, {Dayjs} from "dayjs";
import { Duration } from "dayjs/plugin/duration";
import {parseDuration, sleep} from "../util";
import setRandomInterval from 'set-random-interval';
import {parseDuration, setRandomInterval, sleep} from "../util";
type Awaitable<T> = Promise<T> | T;
@@ -56,26 +55,30 @@ export class SPoll<T extends object> extends Poll<T> {
this.randInterval = setRandomInterval((function (self) {
return async () => {
try {
// DEBUGGING
//
// Removing processed clearing to see if it fixes weird, duplicate/delayed comment processing behavior
//
// clear the tracked, processed activity ids after a set period or number of activities have been processed
// because when RCB is long-running and has streams from high-volume subreddits this list never gets smaller...
// so clear if after time period
if ((self.clearProcessedAfter !== undefined && dayjs().isSameOrAfter(self.clearProcessedAfter))
// or clear if processed list is larger than defined max allowable size (default setting, 2 * polling option limit)
|| (self.clearProcessedSize !== undefined && self.processed.size >= self.clearProcessedSize)) {
if (self.retainProcessed === 0) {
self.processed = new Set();
} else {
// retain some processed so we have continuity between processed list resets -- this is default behavior and retains polling option limit # of activities
// we can slice from the set here because ID order is guaranteed for Set object so list is oldest -> newest
// -- retain last LIMIT number of activities (or all if retain # is larger than list due to user config error)
self.processed = new Set(Array.from(self.processed).slice(Math.max(0, self.processed.size - self.retainProcessed)));
}
// reset time interval if there is one
if (self.clearProcessedAfter !== undefined && self.clearProcessedDuration !== undefined) {
self.clearProcessedAfter = dayjs().add(self.clearProcessedDuration.asSeconds(), 's');
}
}
// if ((self.clearProcessedAfter !== undefined && dayjs().isSameOrAfter(self.clearProcessedAfter))
// // or clear if processed list is larger than defined max allowable size (default setting, 2 * polling option limit)
// || (self.clearProcessedSize !== undefined && self.processed.size >= self.clearProcessedSize)) {
// if (self.retainProcessed === 0) {
// self.processed = new Set();
// } else {
// // retain some processed so we have continuity between processed list resets -- this is default behavior and retains polling option limit # of activities
// // we can slice from the set here because ID order is guaranteed for Set object so list is oldest -> newest
// // -- retain last LIMIT number of activities (or all if retain # is larger than list due to user config error)
// self.processed = new Set(Array.from(self.processed).slice(Math.max(0, self.processed.size - self.retainProcessed)));
// }
// // reset time interval if there is one
// if (self.clearProcessedAfter !== undefined && self.clearProcessedDuration !== undefined) {
// self.clearProcessedAfter = dayjs().add(self.clearProcessedDuration.asSeconds(), 's');
// }
// }
const batch = await self.getter();
const newItems: T[] = [];
for (const item of batch) {
@@ -90,7 +93,7 @@ export class SPoll<T extends object> extends Poll<T> {
// Emit the new listing of all new items
self.emit("listing", newItems);
} catch (err) {
} catch (err: any) {
self.emit('error', err);
self.end();
}
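The retention behavior commented out above relies on `Set` preserving insertion order, so the oldest processed IDs can be sliced off while the newest are kept. A standalone sketch of that trim step (the helper name is hypothetical):

```typescript
// Keep only the newest `retain` entries of an insertion-ordered Set of
// processed activity IDs -- mirrors the (now disabled) clearing logic above.
const trimProcessed = (processed: Set<string>, retain: number): Set<string> => {
    if (retain === 0) {
        return new Set();
    }
    // Set iteration order is insertion order, so the array is oldest -> newest
    return new Set(Array.from(processed).slice(Math.max(0, processed.size - retain)));
};

const processed = new Set(['t3_a', 't3_b', 't3_c', 't3_d']);
console.log(Array.from(trimProcessed(processed, 2))); // [ 't3_c', 't3_d' ]
```

If `retain` is larger than the set (a user config error mentioned in the comments above), the slice start clamps to 0 and everything is kept.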


@@ -17,7 +17,7 @@ import {
cacheStats, compareDurationValue, comparisonTextOp, createCacheManager, createHistoricalStatsDisplay,
formatNumber, getActivityAuthorName, getActivitySubredditName, isStrongSubredditState,
mergeArr, parseDurationComparison,
parseExternalUrl, parseGenericValueComparison,
parseExternalUrl, parseGenericValueComparison, parseRedditEntity,
parseWikiContext, shouldCacheSubredditStateCriteriaResult, subredditStateIsNameOnly, toStrongSubredditState
} from "../util";
import LoggedError from "../Utils/LoggedError";
@@ -40,7 +40,7 @@ import {
HistoricalStats,
HistoricalStatUpdateData,
SubredditHistoricalStats,
SubredditHistoricalStatsDisplay,
SubredditHistoricalStatsDisplay, ThirdPartyCredentialsJsonConfig,
} from "../Common/interfaces";
import UserNotes from "./UserNotes";
import Mustache from "mustache";
@@ -62,6 +62,7 @@ export interface SubredditResourceConfig extends Footer {
subreddit: Subreddit,
logger: Logger;
client: ExtendedSnoowrap
credentials?: ThirdPartyCredentialsJsonConfig
}
interface SubredditResourceOptions extends Footer {
@@ -74,6 +75,7 @@ interface SubredditResourceOptions extends Footer {
client: ExtendedSnoowrap;
prefix?: string;
actionedEventsMax: number;
thirdPartyCredentials: ThirdPartyCredentialsJsonConfig
}
export interface SubredditResourceSetOptions extends CacheConfig, Footer {
@@ -101,6 +103,7 @@ export class SubredditResources {
historicalSaveInterval?: any;
prefix?: string
actionedEventsMax: number;
thirdPartyCredentials: ThirdPartyCredentialsJsonConfig;
stats: {
cache: ResourceStats
@@ -126,6 +129,7 @@ export class SubredditResources {
actionedEventsMax,
cacheSettingsHash,
client,
thirdPartyCredentials,
} = options || {};
this.cacheSettingsHash = cacheSettingsHash;
@@ -141,6 +145,7 @@ export class SubredditResources {
this.wikiTTL = wikiTTL === true ? 0 : wikiTTL;
this.filterCriteriaTTL = filterCriteriaTTL === true ? 0 : filterCriteriaTTL;
this.subreddit = subreddit;
this.thirdPartyCredentials = thirdPartyCredentials;
this.name = name;
if (logger === undefined) {
const alogger = winston.loggers.get('app')
@@ -380,7 +385,7 @@ export class SubredditResources {
// @ts-ignore
return await item.fetch();
}
} catch (err) {
} catch (err: any) {
this.logger.error('Error while trying to fetch a cached activity', err);
throw err.logged;
}
@@ -415,7 +420,7 @@ export class SubredditResources {
return subreddit as Subreddit;
}
} catch (err) {
} catch (err: any) {
this.logger.error('Error while trying to fetch a cached activity', err);
throw err.logged;
}
@@ -527,7 +532,7 @@ export class SubredditResources {
// @ts-ignore
const wikiPage = sub.getWikiPage(wikiContext.wiki);
wikiContent = await wikiPage.content_md;
} catch (err) {
} catch (err: any) {
let msg = `Could not read wiki page for an unknown reason. Please ensure the page 'https://reddit.com${sub.display_name_prefixed}/wiki/${wikiContext.wiki}' exists and is readable`;
if(err.statusCode !== undefined) {
if(err.statusCode === 404) {
@@ -543,7 +548,7 @@ export class SubredditResources {
try {
const response = await fetch(extUrl as string);
wikiContent = await response.text();
} catch (err) {
} catch (err: any) {
const msg = `Error occurred while trying to fetch the url ${extUrl}`;
this.logger.error(msg, err);
throw new LoggedError(msg);
@@ -648,7 +653,7 @@ export class SubredditResources {
this.stats.cache.subredditCrit.miss++;
await this.cache.set(hash, itemResult, {ttl: this.filterCriteriaTTL});
return itemResult;
} catch (err) {
} catch (err: any) {
if (err.logged !== true) {
this.logger.error('Error occurred while testing subreddit criteria', err);
}
@@ -718,7 +723,7 @@ export class SubredditResources {
this.stats.cache.itemCrit.miss++;
await this.cache.set(hash, itemResult, {ttl: this.filterCriteriaTTL});
return itemResult;
} catch (err) {
} catch (err: any) {
if (err.logged !== true) {
this.logger.error('Error occurred while testing item criteria', err);
}
@@ -751,6 +756,15 @@ export class SubredditResources {
return false;
}
break;
case 'isUserProfile':
const entity = parseRedditEntity(subreddit.display_name);
const entityIsUserProfile = entity.type === 'user';
if(crit[k] !== entityIsUserProfile) {
// @ts-ignore
log.debug(`Failed: Expected => ${k}:${crit[k]} | Found => ${k}:${entityIsUserProfile}`)
return false
}
break;
case 'over18':
case 'over_18':
// handling an edge case where user may have confused Comment/Submission state "over_18" with SubredditState "over18"
@@ -880,7 +894,7 @@ export class SubredditResources {
log.debug(`Failed to match title as regular expression: ${titleReg}`);
return false;
}
} catch (err) {
} catch (err: any) {
log.error(`An error occurred while attempting to match title against string as regular expression: ${titleReg}. Most likely the string does not make a valid regular expression.`, err);
return false
}
@@ -898,6 +912,31 @@ export class SubredditResources {
return false
}
break;
case 'op':
if(item instanceof Submission) {
log.warn(`On a Submission the 'op' property will always be true. Did you mean to use this on a comment instead?`);
break;
}
// @ts-ignore
if (item.is_submitter !== crit[k]) {
// @ts-ignore
log.debug(`Failed: Expected => ${k}:${crit[k]} | Found => ${k}:${item[k]}`)
return false
}
break;
case 'depth':
if(item instanceof Submission) {
log.warn(`Cannot test for 'depth' on a Submission`);
break;
}
// @ts-ignore
const depthCompare = parseGenericValueComparison(crit[k] as string);
if(!comparisonTextOp(item.depth, depthCompare.operator, depthCompare.value)) {
// @ts-ignore
log.debug(`Failed: Expected => ${k}:${crit[k]} | Found => ${k}:${item.depth}`)
return false
}
break;
default:
// @ts-ignore
if (item[k] !== undefined) {
@@ -991,6 +1030,13 @@ export class SubredditResources {
// }
// return hash;
}
getThirdPartyCredentials(name: string) {
if(this.thirdPartyCredentials[name] !== undefined) {
return this.thirdPartyCredentials[name];
}
return undefined;
}
}
export class BotResourcesManager {
@@ -1006,6 +1052,7 @@ export class BotResourcesManager {
actionedEventsMaxDefault?: number;
actionedEventsDefault: number;
pruneInterval: any;
defaultThirdPartyCredentials: ThirdPartyCredentialsJsonConfig;
constructor(config: BotInstanceConfig) {
const {
@@ -1022,13 +1069,17 @@ export class BotResourcesManager {
actionedEventsDefault,
},
name,
credentials,
credentials: {
reddit,
...thirdParty
},
caching,
} = config;
caching.provider.prefix = buildCachePrefix([caching.provider.prefix, 'SHARED']);
const {actionedEventsMax: eMax, actionedEventsDefault: eDef, ...relevantCacheSettings} = caching;
this.cacheHash = objectHash.sha1(relevantCacheSettings);
this.defaultCacheConfig = caching;
this.defaultThirdPartyCredentials = thirdParty;
this.ttlDefaults = {authorTTL, userNotesTTL, wikiTTL, commentTTL, submissionTTL, filterCriteriaTTL, subredditTTL};
const options = provider;
@@ -1061,13 +1112,14 @@ export class BotResourcesManager {
async set(subName: string, initOptions: SubredditResourceConfig): Promise<SubredditResources> {
let hash = 'default';
const { caching, ...init } = initOptions;
const { caching, credentials, ...init } = initOptions;
let opts: SubredditResourceOptions = {
cache: this.defaultCache,
cacheType: this.cacheType,
cacheSettingsHash: hash,
ttl: this.ttlDefaults,
thirdPartyCredentials: credentials ?? this.defaultThirdPartyCredentials,
prefix: this.defaultCacheConfig.provider.prefix,
actionedEventsMax: this.actionedEventsMaxDefault !== undefined ? Math.min(this.actionedEventsDefault, this.actionedEventsMaxDefault) : this.actionedEventsDefault,
...init,
@@ -1095,6 +1147,7 @@ export class BotResourcesManager {
actionedEventsMax: eventsMax,
cacheType: trueProvider.store,
cacheSettingsHash: hash,
thirdPartyCredentials: credentials ?? this.defaultThirdPartyCredentials,
prefix: subPrefix,
...init,
...trueRest,


@@ -4,7 +4,6 @@ import {
COMMENT_URL_ID,
deflateUserNotes, getActivityAuthorName,
inflateUserNotes,
isScopeError,
parseLinkIdentifier,
SUBMISSION_URL_ID
} from "../util";
@@ -14,6 +13,7 @@ import LoggedError from "../Utils/LoggedError";
import Submission from "snoowrap/dist/objects/Submission";
import {RichContent} from "../Common/interfaces";
import {Cache} from 'cache-manager';
import {isScopeError} from "../Utils/Errors";
interface RawUserNotesPayload {
ver: number,
@@ -185,7 +185,7 @@ export class UserNotes {
}
return userNotes as RawUserNotesPayload;
} catch (err) {
} catch (err: any) {
const msg = `Could not read usernotes. Make sure at least one moderator has used toolbox and usernotes before.`;
this.logger.error(msg, err);
throw new LoggedError(msg);
@@ -235,7 +235,7 @@ export class UserNotes {
}
return payload as RawUserNotesPayload;
} catch (err) {
} catch (err: any) {
let msg = 'Could not edit usernotes.';
// Make sure at least one moderator has used toolbox and usernotes before and that this account has editing permissions`;
if(isScopeError(err)) {

src/Utils/Errors.ts Normal file

@@ -0,0 +1,18 @@
import {StatusCodeError} from "../Common/interfaces";
export const isRateLimitError = (err: any) => {
return typeof err === 'object' && err.name === 'RateLimitError';
}
export const isScopeError = (err: any): boolean => {
if(typeof err === 'object' && err.name === 'StatusCodeError' && err.response !== undefined) {
const authHeader = err.response.headers['www-authenticate'];
return authHeader !== undefined && authHeader.includes('insufficient_scope');
}
return false;
}
export const isStatusError = (err: any): err is StatusCodeError => {
return typeof err === 'object' && err.name === 'StatusCodeError' && err.response !== undefined;
}
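The guards in the new `src/Utils/Errors.ts` classify errors by duck-typing on `name` and `response` rather than `instanceof`. A standalone sketch of the same checks run against mock error shapes (the mocks and the added null guard are illustrative, not real snoowrap errors):

```typescript
// Duck-typed error classification mirroring the guards above
// (with an added null guard, since typeof null === 'object').
const isRateLimitError = (err: any): boolean =>
    typeof err === 'object' && err !== null && err.name === 'RateLimitError';

const isScopeError = (err: any): boolean => {
    if (typeof err === 'object' && err !== null && err.name === 'StatusCodeError' && err.response !== undefined) {
        const authHeader = err.response.headers['www-authenticate'];
        return authHeader !== undefined && authHeader.includes('insufficient_scope');
    }
    return false;
};

console.log(isRateLimitError({name: 'RateLimitError'})); // true
console.log(isScopeError({
    name: 'StatusCodeError',
    response: {headers: {'www-authenticate': 'Bearer realm="reddit", error="insufficient_scope"'}},
})); // true
console.log(isScopeError({name: 'StatusCodeError', response: {headers: {}}})); // false
```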


@@ -26,13 +26,25 @@ export class ExtendedSnoowrap extends Snoowrap {
}
try {
return parseSubredditName(x);
} catch (err) {
} catch (err: any) {
return x;
}
});
return await this.oauthRequest({uri: '/api/info', method: 'get', qs: { sr_name: names.join(',')}}) as Listing<Subreddit>;
}
async assignUserFlairByTemplateId(options: { flairTemplateId: string, username: string, subredditName: string }): Promise<any> {
return await this.oauthRequest({
uri: `/r/${options.subredditName}/api/selectflair`,
method: 'post',
form: {
api_type: 'json',
name: options.username,
flair_template_id: options.flairTemplateId,
}
});
}
}
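The new `assignUserFlairByTemplateId` method wraps a POST to reddit's `selectflair` endpoint. Below is a sketch of the request it assembles, with the network call factored out so the payload itself can be inspected (the builder name is hypothetical; the uri and form fields come from the diff above):

```typescript
interface FlairByTemplateOptions {
    flairTemplateId: string,
    username: string,
    subredditName: string,
}

// Build the oauthRequest arguments used by assignUserFlairByTemplateId above
const buildSelectFlairRequest = (options: FlairByTemplateOptions) => ({
    uri: `/r/${options.subredditName}/api/selectflair`,
    method: 'post' as const,
    form: {
        api_type: 'json',
        name: options.username,
        flair_template_id: options.flairTemplateId,
    },
});

const req = buildSelectFlairRequest({
    flairTemplateId: '1234-abcd',
    username: 'exampleUser',
    subredditName: 'mySubreddit',
});
console.log(req.uri); // /r/mySubreddit/api/selectflair
```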
export class RequestTrackingSnoowrap extends ExtendedSnoowrap {


@@ -1,4 +1,4 @@
import Snoowrap, {RedditUser} from "snoowrap";
import Snoowrap, {Listing, RedditUser} from "snoowrap";
import Submission from "snoowrap/dist/objects/Submission";
import Comment from "snoowrap/dist/objects/Comment";
import {Duration, DurationUnitsObjectType} from "dayjs/plugin/duration";
@@ -15,7 +15,7 @@ import {
import {
compareDurationValue,
comparisonTextOp, escapeRegex, getActivityAuthorName,
isActivityWindowCriteria, isStatusError,
isActivityWindowCriteria,
normalizeName,
parseDuration,
parseDurationComparison,
@@ -23,7 +23,7 @@ import {
parseGenericValueOrPercentComparison,
parseRuleResultsToMarkdownSummary, parseStringToRegex,
parseSubredditName,
truncateStringToLength
truncateStringToLength, windowToActivityWindowCriteria
} from "../util";
import UserNotes from "../Subreddit/UserNotes";
import {Logger} from "winston";
@@ -31,6 +31,7 @@ import InvalidRegexError from "./InvalidRegexError";
import SimpleError from "./SimpleError";
import {AuthorCriteria} from "../Author/Author";
import {URL} from "url";
import {isStatusError} from "./Errors";
export const BOT_LINK = 'https://www.reddit.com/r/ContextModBot/comments/otz396/introduction_to_contextmodbot';
@@ -43,14 +44,16 @@ export interface AuthorActivitiesOptions {
chunkSize?: number,
// TODO maybe move this into window
keepRemoved?: boolean,
[key: string]: any,
}
export async function getAuthorActivities(user: RedditUser, options: AuthorTypedActivitiesOptions): Promise<Array<Submission | Comment>> {
export async function getActivities(listingFunc: (limit: number) => Promise<Listing<Submission | Comment>>, options: AuthorActivitiesOptions): Promise<Array<Submission | Comment>> {
const {
chunkSize: cs = 100,
window: optWindow,
keepRemoved = true,
...restFetchOptions
} = options;
let satisfiedCount: number | undefined,
@@ -64,33 +67,56 @@ export async function getAuthorActivities(user: RedditUser, options: AuthorTyped
let includes: string[] = [];
let excludes: string[] = [];
if (isActivityWindowCriteria(optWindow)) {
const {
satisfyOn = 'any',
count,
duration,
subreddits: {
include = [],
exclude = [],
} = {},
} = optWindow;
const strongWindow = windowToActivityWindowCriteria(optWindow);
includes = include.map(x => parseSubredditName(x).toLowerCase());
excludes = exclude.map(x => parseSubredditName(x).toLowerCase());
const {
satisfyOn = 'any',
count,
duration: oDuration,
subreddits: {
include = [],
exclude = [],
} = {},
} = strongWindow;
if (includes.length > 0 && excludes.length > 0) {
// TODO add logger so this can be logged...
// this.logger.warn('include and exclude both specified, exclude will be ignored');
}
satisfiedCount = count;
durVal = duration;
satisfy = satisfyOn
} else if (typeof optWindow === 'number') {
satisfiedCount = optWindow;
} else {
durVal = optWindow as DurationVal;
satisfy = satisfyOn;
satisfiedCount = count;
includes = include;
excludes = exclude;
durVal = oDuration;
if (includes.length > 0 && excludes.length > 0) {
// TODO add logger so this can be logged...
// this.logger.warn('include and exclude both specified, exclude will be ignored');
}
// if (isActivityWindowCriteria(optWindow)) {
// const {
// satisfyOn = 'any',
// count,
// duration,
// subreddits: {
// include = [],
// exclude = [],
// } = {},
// } = optWindow;
//
// includes = include.map(x => parseSubredditName(x).toLowerCase());
// excludes = exclude.map(x => parseSubredditName(x).toLowerCase());
//
// if (includes.length > 0 && excludes.length > 0) {
// // TODO add logger so this can be logged...
// // this.logger.warn('include and exclude both specified, exclude will be ignored');
// }
// satisfiedCount = count;
// durVal = duration;
// satisfy = satisfyOn
// } else if (typeof optWindow === 'number') {
// satisfiedCount = optWindow;
// } else {
// durVal = optWindow as DurationVal;
// }
// if count is less than max limit (100) go ahead and just get that many. may result in faster response time for low numbers
if (satisfiedCount !== undefined) {
chunkSize = Math.min(chunkSize, satisfiedCount);
@@ -124,27 +150,8 @@ export async function getAuthorActivities(user: RedditUser, options: AuthorTyped
}
let items: Array<Submission | Comment> = [];
//let count = 1;
let listing = [];
try {
switch (options.type) {
case 'comment':
listing = await user.getComments({limit: chunkSize});
break;
case 'submission':
listing = await user.getSubmissions({limit: chunkSize});
break;
default:
listing = await user.getOverview({limit: chunkSize});
break;
}
} catch (err) {
if(isStatusError(err) && err.statusCode === 404) {
throw new SimpleError('Reddit returned a 404 for user history. Likely this user is shadowbanned.');
} else {
throw err;
}
}
let listing = await listingFunc(chunkSize);
let hitEnd = false;
let offset = chunkSize;
while (!hitEnd) {
@@ -219,12 +226,35 @@ export async function getAuthorActivities(user: RedditUser, options: AuthorTyped
if (!hitEnd) {
offset += chunkSize;
listing = await listing.fetchMore({amount: chunkSize});
listing = await listing.fetchMore({amount: chunkSize, ...restFetchOptions});
}
}
return Promise.resolve(items);
}
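The refactored `getActivities` above delegates fetching to a `listingFunc` and then pages through results in chunks until the window criteria are satisfied or the listing runs out. A simplified, synchronous sketch of that chunking loop (names here are illustrative, not the actual implementation):

```typescript
// Illustrative sketch only: page through a source in fixed-size chunks,
// stopping once a satisfaction predicate passes or the source is exhausted.
type Item = { createdOn: number };

function fetchChunked(
    fetchChunk: (offset: number, limit: number) => Item[],
    satisfied: (items: Item[]) => boolean,
    chunkSize = 100,
): Item[] {
    let items: Item[] = [];
    let offset = 0;
    let hitEnd = false;
    while (!hitEnd) {
        const chunk = fetchChunk(offset, chunkSize);
        items = items.concat(chunk);
        // stop when the window is satisfied or the source has no more items
        hitEnd = satisfied(items) || chunk.length < chunkSize;
        offset += chunkSize;
    }
    return items;
}
```

The real code is async (`listing.fetchMore`) and also applies duration/subreddit filtering per chunk, but the stop conditions follow the same shape.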
export async function getAuthorActivities(user: RedditUser, options: AuthorTypedActivitiesOptions): Promise<Array<Submission | Comment>> {
const listFunc = (chunkSize: number): Promise<Listing<Submission | Comment>> => {
switch (options.type) {
case 'comment':
return user.getComments({limit: chunkSize});
case 'submission':
return user.getSubmissions({limit: chunkSize});
default:
return user.getOverview({limit: chunkSize});
}
};
try {
return await getActivities(listFunc, options);
} catch (err: any) {
if(isStatusError(err) && err.statusCode === 404) {
throw new SimpleError('Reddit returned a 404 for user history. Likely this user is shadowbanned.');
} else {
throw err;
}
}
}
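The `catch (err: any)` blocks throughout this diff narrow the caught value with `isStatusError` before reading `statusCode`. The real guard is imported from `./Errors`; a hypothetical sketch of what such a type guard looks like:

```typescript
// Hypothetical shape of a status-bearing error; the actual guard lives in ./Errors
interface StatusError extends Error {
    statusCode: number;
}

// User-defined type guard: narrows unknown/any to StatusError
function isStatusError(err: unknown): err is StatusError {
    return err instanceof Error && typeof (err as any).statusCode === 'number';
}
```

Guarding this way keeps the 404-means-shadowbanned branches type-safe even though TypeScript types caught errors as `any`/`unknown`.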
export const getAuthorComments = async (user: RedditUser, options: AuthorActivitiesOptions): Promise<Comment[]> => {
return await getAuthorActivities(user, {...options, type: 'comment'}) as unknown as Promise<Comment[]>;
}
@@ -336,7 +366,7 @@ export const testAuthorCriteria = async (item: (Comment | Submission), authorOpt
if(shadowBanned) {
return false;
}
} catch (err) {
} catch (err: any) {
if(isStatusError(err) && err.statusCode === 404) {
// user is shadowbanned
// if criteria specifies they should not be shadowbanned then return false now
@@ -545,7 +575,7 @@ export const testAuthorCriteria = async (item: (Comment | Submission), authorOpt
}
}
return true;
} catch (err) {
} catch (err: any) {
if(isStatusError(err) && err.statusCode === 404) {
throw new SimpleError('Reddit returned a 404 while trying to retrieve User profile. It is likely this user is shadowbanned.');
} else {
@@ -571,7 +601,7 @@ export const itemContentPeek = async (item: (Comment | Submission), peekLength =
submissionTitle = item.title;
peek = `${truncatePeek(item.title)} by ${author} https://reddit.com${item.permalink}`;
} else if (item instanceof Comment) {
} else {
// replace newlines with spaces to make peek more compact
content = truncatePeek(item.body.replaceAll('\n', ' '));
peek = `${truncatePeek(content)} by ${author} in https://reddit.com${item.permalink}`;


@@ -0,0 +1,70 @@
// reproduced from https://github.com/sumn2u/string-comparison/blob/master/jscosine.js
// https://sumn2u.medium.com/string-similarity-comparision-in-js-with-examples-4bae35f13968
interface StrMap {
[key: string]: number
}
interface BoolMap {
[key: string]: boolean
}
function termFreqMap(str: string) {
var words = str.split(' ');
var termFreq: StrMap = {};
words.forEach(function(w) {
termFreq[w] = (termFreq[w] || 0) + 1;
});
return termFreq;
}
function addKeysToDict(map: StrMap, dict: BoolMap) {
for (var key in map) {
dict[key] = true;
}
}
function termFreqMapToVector(map: StrMap, dict: StrMap): number[] {
var termFreqVector = [];
for (var term in dict) {
termFreqVector.push(map[term] || 0);
}
return termFreqVector;
}
function vecDotProduct(vecA: number[], vecB: number[]) {
var product = 0;
for (var i = 0; i < vecA.length; i++) {
product += vecA[i] * vecB[i];
}
return product;
}
function vecMagnitude(vec: number[]) {
var sum = 0;
for (var i = 0; i < vec.length; i++) {
sum += vec[i] * vec[i];
}
return Math.sqrt(sum);
}
function cosineSimilarity(vecA: number[], vecB: number[]) {
return vecDotProduct(vecA, vecB) / (vecMagnitude(vecA) * vecMagnitude(vecB));
}
const calculateCosineSimilarity = function textCosineSimilarity(strA: string, strB: string) {
var termFreqA = termFreqMap(strA);
var termFreqB = termFreqMap(strB);
var dict = {};
addKeysToDict(termFreqA, dict);
addKeysToDict(termFreqB, dict);
var termFreqVecA = termFreqMapToVector(termFreqA, dict);
var termFreqVecB = termFreqMapToVector(termFreqB, dict);
return cosineSimilarity(termFreqVecA, termFreqVecB);
}
export default calculateCosineSimilarity;
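As a sanity check on the approach: identical sentences score ~1, sentences with no shared words score 0, and partial overlap lands in between. A condensed re-statement of the same term-frequency cosine calculation (not the exported function itself):

```typescript
// Condensed version of the term-frequency cosine similarity above, for illustration
const tf = (s: string): Record<string, number> => {
    const m: Record<string, number> = {};
    for (const w of s.split(' ')) {
        m[w] = (m[w] ?? 0) + 1;
    }
    return m;
};

function cosine(a: string, b: string): number {
    const fa = tf(a), fb = tf(b);
    // union of terms from both strings forms the vector dimensions
    const terms = new Set([...Object.keys(fa), ...Object.keys(fb)]);
    let dot = 0, magA = 0, magB = 0;
    for (const t of terms) {
        const va = fa[t] ?? 0, vb = fb[t] ?? 0;
        dot += va * vb;
        magA += va * va;
        magB += vb * vb;
    }
    return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}
```

Note this is word-order-insensitive, which is why the repost rule combines it with other algorithms rather than relying on it alone.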


@@ -0,0 +1,19 @@
import leven from "leven";
const levenSimilarity = (valA: string, valB: string) => {
let longer: string;
let shorter: string;
if (valA.length > valB.length) {
longer = valA;
shorter = valB;
} else {
longer = valB;
shorter = valA;
}
const distance = leven(longer, shorter);
const diff = (distance / longer.length) * 100;
return [distance, 100 - diff];
}
export default levenSimilarity;
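`levenSimilarity` returns both the raw edit distance and a percent similarity normalized by the longer string's length. A self-contained sketch of that normalization, inlining a minimal single-row Levenshtein in place of the `leven` package:

```typescript
// Minimal Levenshtein distance (the project uses the `leven` package instead)
function levenshtein(a: string, b: string): number {
    // dp holds one row of the distance matrix
    const dp = Array.from({length: a.length + 1}, (_, i) => i);
    for (let j = 1; j <= b.length; j++) {
        let prev = dp[0]; // D[i-1][j-1]
        dp[0] = j;
        for (let i = 1; i <= a.length; i++) {
            const tmp = dp[i]; // D[i][j-1]
            dp[i] = Math.min(
                dp[i] + 1,      // insertion
                dp[i - 1] + 1,  // deletion
                prev + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
            );
            prev = tmp;
        }
    }
    return dp[a.length];
}

// Same normalization as levenSimilarity: percent similarity relative to the longer string
function similarityPct(a: string, b: string): number {
    const longer = a.length >= b.length ? a : b;
    const distance = levenshtein(a, b);
    return 100 - (distance / longer.length) * 100;
}
```

Normalizing by the longer string's length keeps the score in 0..100 regardless of how different the input lengths are.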

src/Utils/ThirdParty/YoutubeClient.ts (vendored, new file, 59 lines)

@@ -0,0 +1,59 @@
import {URLSearchParams} from "url";
import fetch from "node-fetch";
import {parseUsableLinkIdentifier} from "../../util";
import dayjs from "dayjs";
import {youtube, youtube_v3 } from '@googleapis/youtube';
import Schema$CommentThread = youtube_v3.Schema$CommentThread;
import {RepostItem} from "../../Common/interfaces";
const parseYtIdentifier = parseUsableLinkIdentifier();
export class YoutubeClient {
apiKey: string;
client: youtube_v3.Youtube
constructor(key: string) {
this.apiKey = key;
this.client = youtube({version: 'v3', auth: key});
}
getVideoTopComments = async (url: string, maxResults: number = 50): Promise<Schema$CommentThread[]> => {
const videoId = parseYtIdentifier(url);
const res = await this.client.commentThreads.list({
part: ['snippet'],
videoId,
maxResults: maxResults,
textFormat: 'plainText',
order: 'relevance',
});
const items = res.data.items as Schema$CommentThread[];
items.sort((a, b) => (a.snippet?.topLevelComment?.snippet?.likeCount as number) - (b.snippet?.topLevelComment?.snippet?.likeCount as number)).reverse();
return items;
}
}
export const commentsAsRepostItems = (comments: Schema$CommentThread[]): RepostItem[] => {
return comments.map((x) => {
const textDisplay = x.snippet?.topLevelComment?.snippet?.textDisplay;
const publishedAt = x.snippet?.topLevelComment?.snippet?.publishedAt;
const id = x.snippet?.topLevelComment?.id;
const videoId = x.snippet?.topLevelComment?.snippet?.videoId;
return {
value: textDisplay as string,
createdOn: dayjs(publishedAt as string).unix(),
source: 'Youtube',
id: x.snippet?.topLevelComment?.id as string,
sourceUrl: `https://youtube.com/watch?v=${videoId}&lc=${id}`,
score: x.snippet?.topLevelComment?.snippet?.likeCount as number,
acquisitionType: 'external',
itemType: 'comment'
};
})
}
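`getVideoTopComments` orders threads by their top-level comment's like count, descending (the source sorts ascending then reverses). That sort can be exercised against mock data without an API key; the `MockThread` shape below is an assumption mirroring only the fields the sort touches, not the full `Schema$CommentThread` type:

```typescript
// Illustrative mock of the nested Schema$CommentThread shape used above
interface MockThread {
    snippet?: { topLevelComment?: { snippet?: { likeCount?: number } } };
}

// Equivalent descending sort, treating missing like counts as 0
const byLikesDesc = (items: MockThread[]): MockThread[] =>
    [...items].sort((a, b) =>
        (b.snippet?.topLevelComment?.snippet?.likeCount ?? 0) -
        (a.snippet?.topLevelComment?.snippet?.likeCount ?? 0));
```

Defaulting missing counts to 0 (rather than casting with `as number`) avoids `NaN` comparisons when a thread lacks a top-level comment.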


@@ -15,7 +15,6 @@ export const getLogger = (options: any, name = 'app'): Logger => {
const consoleTransport = new transports.Console({
handleExceptions: true,
// @ts-expect-error
handleRejections: true,
});
@@ -31,7 +30,6 @@ export const getLogger = (options: any, name = 'app'): Logger => {
name: 'duplex',
dump: false,
handleExceptions: true,
// @ts-expect-error
handleRejections: true,
}),
...additionalTransports,


@@ -44,6 +44,7 @@ import {BotInstance, CMInstance} from "../interfaces";
import { URL } from "url";
import {MESSAGE} from "triple-beam";
import Autolinker from "autolinker";
import path from "path";
const emitter = new EventEmitter();
@@ -572,7 +573,7 @@ const webClient = async (options: OperatorConfig) => {
try {
server = await app.listen(port);
io = new SocketServer(server);
} catch (err) {
} catch (err: any) {
logger.error('Error occurred while initializing web or socket.io server', err);
err.logged = true;
throw err;
@@ -761,7 +762,7 @@ const webClient = async (options: OperatorConfig) => {
},
}).json() as any;
} catch(err) {
} catch(err: any) {
logger.error(`Error occurred while retrieving bot information. Will update heartbeat -- ${err.message}`, {instance: instance.friendly});
refreshClient(clients.find(x => normalizeUrl(x.host) === instance.normalUrl) as BotConnection);
return res.render('offline', {
@@ -812,8 +813,10 @@ const webClient = async (options: OperatorConfig) => {
});
app.getAsync('/config', async (req: express.Request, res: express.Response) => {
const {format = 'json'} = req.query as any;
res.render('config', {
title: `Configuration Editor`
title: `Configuration Editor`,
format,
});
});
@@ -989,7 +992,7 @@ const webClient = async (options: OperatorConfig) => {
}
}).json() as object;
io.to(session.id).emit('opStats', resp);
} catch (err) {
} catch (err: any) {
logger.error(`Could not retrieve stats ${err.message}`, {instance: bot.friendly});
clearInterval(interval);
}
@@ -1027,7 +1030,7 @@ const webClient = async (options: OperatorConfig) => {
}
}).json() as object;
return {success: true, ...resp};
} catch (err) {
} catch (err: any) {
return {success: false, error: err.message};
}
}
@@ -1096,7 +1099,7 @@ const webClient = async (options: OperatorConfig) => {
// botStat.indicator = 'red';
// }
logger.verbose(`Heartbeat detected`, {instance: botStat.friendly});
} catch (err) {
} catch (err: any) {
botStat.error = err.message;
logger.error(`Heartbeat response from ${botStat.friendly} was not ok: ${err.message}`, {instance: botStat.friendly});
} finally {


@@ -83,7 +83,7 @@ const action = async (req: express.Request, res: express.Response) => {
}
break;
}
} catch (err) {
} catch (err: any) {
if (!(err instanceof LoggedError)) {
mLogger.error(err, {subreddit: manager.displayLabel});
}


@@ -32,7 +32,7 @@ const addBot = () => {
req.botApp.logger.error(err);
}
});
} catch (err) {
} catch (err: any) {
if (newBot.error === undefined) {
newBot.error = err.message;
result.error = err.message;


@@ -46,7 +46,7 @@ const logs = (subLogMap: Map<string, LogEntry[]>) => {
console.log('Request closed detected with "close" listener');
res.destroy();
return;
} catch (e) {
} catch (e: any) {
if (e.code !== 'ECONNRESET') {
logger.error(e);
}


@@ -114,7 +114,7 @@ const rcbServer = async function (options: OperatorConfig) {
try {
httpServer = await server.listen(port);
io = new SocketServer(httpServer);
} catch (err) {
} catch (err: any) {
logger.error('Error occurred while initializing web or socket.io server', err);
err.logged = true;
throw err;
@@ -206,7 +206,7 @@ const rcbServer = async function (options: OperatorConfig) {
if(newApp.error === undefined) {
try {
await newApp.initBots(causedBy);
} catch (err) {
} catch (err: any) {
if(newApp.error === undefined) {
newApp.error = err.message;
}


@@ -103,3 +103,42 @@ a {
.yellow {
color: rgb(253 198 94);
}
.breadcrumb {
cursor: pointer;
}
#breadcrumbs::before,
.breadcrumb:not(:last-child)::after {
content: '';
margin: 0 0.2rem;
}
.breadcrumb.array::before {
content: '[]';
}
.breadcrumb.object::before {
content: '{}';
}
#problems {
background-color: #474646;
padding: 10px;
user-select: text;
}
.problem {
align-items: center;
cursor: pointer;
display: flex;
padding: 0.25rem;
}
.problem-text {
margin-left: 0.5rem;
}
.problem .codicon-warning {
color: orange;
}


@@ -0,0 +1,3 @@
/*! For license information please see 2906.entry.js.LICENSE.txt */
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[2906],{2906:(e,t,o)=>{o.r(t),o.d(t,{conf:()=>n,language:()=>i});var n={comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"`",close:"`"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"`",close:"`"}]},i={defaultToken:"",tokenPostfix:".swift",identifier:/[a-zA-Z_][\w$]*/,attributes:["@autoclosure","@noescape","@noreturn","@NSApplicationMain","@NSCopying","@NSManaged","@objc","@UIApplicationMain","@noreturn","@availability","@IBAction","@IBDesignable","@IBInspectable","@IBOutlet"],accessmodifiers:["public","private","fileprivate","internal"],keywords:["__COLUMN__","__FILE__","__FUNCTION__","__LINE__","as","as!","as?","associativity","break","case","catch","class","continue","convenience","default","deinit","didSet","do","dynamic","dynamicType","else","enum","extension","fallthrough","fileprivate","final","for","func","get","guard","if","import","in","infix","init","inout","internal","is","lazy","left","let","mutating","nil","none","nonmutating","operator","optional","override","postfix","precedence","prefix","private","protocol","Protocol","public","repeat","required","return","right","self","Self","set","static","struct","subscript","super","switch","throw","throws","try","try!","Type","typealias","unowned","var","weak","where","while","willSet","FALSE","TRUE"],symbols:/[=(){}\[\].,:;@#\_&\-<>`?!+*\\\/]/,operatorstart:/[\/=\-+!*%<>&|^~?\u00A1-\u00A7\u00A9\u00AB\u00AC\u00AE\u00B0-\u00B1\u00B6\u00BB\u00BF\u00D7\u00F7\u2016-\u2017\u2020-\u2027\u2030-\u203E\u2041-\u2053\u2055-\u205E\u2190-\u23FF\u2500-\u2775\u2794-\u2BFF\u2E00-\u2E7F\u3001-\u3003\u3008-\u3030]/,operatorend:/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE00-\uFE0F\uFE20-\uFE2F\uE0100-\uE01
EF]/,operators:/(@operatorstart)((@operatorstart)|(@operatorend))*/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,tokenizer:{root:[{include:"@whitespace"},{include:"@comment"},{include:"@attribute"},{include:"@literal"},{include:"@keyword"},{include:"@invokedmethod"},{include:"@symbol"}],whitespace:[[/\s+/,"white"],[/"""/,"string.quote","@endDblDocString"]],endDblDocString:[[/[^"]+/,"string"],[/\\"/,"string"],[/"""/,"string.quote","@popall"],[/"/,"string"]],symbol:[[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/[.]/,"delimiter"],[/@operators/,"operator"],[/@symbols/,"operator"]],comment:[[/\/\/\/.*$/,"comment.doc"],[/\/\*\*/,"comment.doc","@commentdocbody"],[/\/\/.*$/,"comment"],[/\/\*/,"comment","@commentbody"]],commentdocbody:[[/\/\*/,"comment","@commentbody"],[/\*\//,"comment.doc","@pop"],[/\:[a-zA-Z]+\:/,"comment.doc.param"],[/./,"comment.doc"]],commentbody:[[/\/\*/,"comment","@commentbody"],[/\*\//,"comment","@pop"],[/./,"comment"]],attribute:[[/@@@identifier/,{cases:{"@attributes":"keyword.control","@default":""}}]],literal:[[/"/,{token:"string.quote",next:"@stringlit"}],[/0[b]([01]_?)+/,"number.binary"],[/0[o]([0-7]_?)+/,"number.octal"],[/0[x]([0-9a-fA-F]_?)+([pP][\-+](\d_?)+)?/,"number.hex"],[/(\d_?)*\.(\d_?)+([eE][\-+]?(\d_?)+)?/,"number.float"],[/(\d_?)+/,"number"]],stringlit:[[/\\\(/,{token:"operator",next:"@interpolatedexpression"}],[/@escapes/,"string"],[/\\./,"string.escape.invalid"],[/"/,{token:"string.quote",next:"@pop"}],[/./,"string"]],interpolatedexpression:[[/\(/,{token:"operator",next:"@interpolatedexpression"}],[/\)/,{token:"operator",next:"@pop"}],{include:"@literal"},{include:"@keyword"},{include:"@symbol"}],keyword:[[/`/,{token:"operator",next:"@escapedkeyword"}],[/@identifier/,{cases:{"@keywords":"keyword","[A-Z][a-zA-Z0-9$]*":"type.identifier","@default":"identifier"}}]],escapedkeyword:[[/`/,{token:"operator",next:"@pop"}],[/./,"identifier"]],invokedmethod:[[/([.])(@identifier)/,{cases:{$2:[
"delimeter","type.identifier"],"@default":""}}]]}}}}]);
//# sourceMappingURL=2906.entry.js.map


@@ -0,0 +1,3 @@
/*!---------------------------------------------------------------------------------------------
* Copyright (C) David Owens II, owensd.io. All rights reserved.
*--------------------------------------------------------------------------------------------*/

File diff suppressed because one or more lines are too long


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3008],{3008:(e,n,s)=>{s.r(n),s.d(n,{conf:()=>i,language:()=>t});var i={brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">",notIn:["string"]}],surroundingPairs:[{open:"(",close:")"},{open:"[",close:"]"},{open:"`",close:"`"}],folding:{markers:{start:new RegExp("^\\s*\x3c!--\\s*#?region\\b.*--\x3e"),end:new RegExp("^\\s*\x3c!--\\s*#?endregion\\b.*--\x3e")}}},t={defaultToken:"",tokenPostfix:".rst",control:/[\\`*_\[\]{}()#+\-\.!]/,escapes:/\\(?:@control)/,empty:["area","base","basefont","br","col","frame","hr","img","input","isindex","link","meta","param"],alphanumerics:/[A-Za-z0-9]/,simpleRefNameWithoutBq:/(?:@alphanumerics[-_+:.]*@alphanumerics)+|(?:@alphanumerics+)/,simpleRefName:/(?:`@phrase`|@simpleRefNameWithoutBq)/,phrase:/@simpleRefNameWithoutBq(?:\s@simpleRefNameWithoutBq)*/,citationName:/[A-Za-z][A-Za-z0-9-_.]*/,blockLiteralStart:/(?:[!"#$%&'()*+,-./:;<=>?@\[\]^_`{|}~]|[\s])/,precedingChars:/(?:[ -:/'"<([{])/,followingChars:/(?:[ -.,:;!?/'")\]}>]|$)/,punctuation:/(=|-|~|`|#|"|\^|\+|\*|:|\.|'|_|\+)/,tokenizer:{root:[[/^(@punctuation{3,}$){1,1}?/,"keyword"],[/^\s*([\*\-+‣•]|[a-zA-Z0-9]+\.|\([a-zA-Z0-9]+\)|[a-zA-Z0-9]+\))\s/,"keyword"],[/([ ]::)\s*$/,"keyword","@blankLineOfLiteralBlocks"],[/(::)\s*$/,"keyword","@blankLineOfLiteralBlocks"],{include:"@tables"},{include:"@explicitMarkupBlocks"},{include:"@inlineMarkup"}],explicitMarkupBlocks:[{include:"@citations"},{include:"@footnotes"},[/^(\.\.\s)(@simpleRefName)(::\s)(.*)$/,[{token:"",next:"subsequentLines"},"keyword","",""]],[/^(\.\.)(\s+)(_)(@simpleRefName)(:)(\s+)(.*)/,[{token:"",next:"hyperlinks"},"","","string.link","","","string.link"]],[/^((?:(?:\.\.)(?:\s+))?)(__)(:)(\s+)(.*)/,[{token:"",next:"subsequentLines"},"","","","string.link"]],[/^(__\s+)(.+)/,["","string.link"]],[/^(\.\.)( \|)([^| ]+[^|]*[^| ]*)(\| )(@simpleRefName)(:: 
.*)/,[{token:"",next:"subsequentLines"},"","string.link","","keyword",""],"@rawBlocks"],[/(\|)([^| ]+[^|]*[^| ]*)(\|_{0,2})/,["","string.link",""]],[/^(\.\.)([ ].*)$/,[{token:"",next:"@comments"},"comment"]]],inlineMarkup:[{include:"@citationsReference"},{include:"@footnotesReference"},[/(@simpleRefName)(_{1,2})/,["string.link",""]],[/(`)([^<`]+\s+)(<)(.*)(>)(`)(_)/,["","string.link","","string.link","","",""]],[/\*\*([^\\*]|\*(?!\*))+\*\*/,"strong"],[/\*[^*]+\*/,"emphasis"],[/(``)((?:[^`]|\`(?!`))+)(``)/,["","keyword",""]],[/(__\s+)(.+)/,["","keyword"]],[/(:)((?:@simpleRefNameWithoutBq)?)(:`)([^`]+)(`)/,["","keyword","","",""]],[/(`)([^`]+)(`:)((?:@simpleRefNameWithoutBq)?)(:)/,["","","","keyword",""]],[/(`)([^`]+)(`)/,""],[/(_`)(@phrase)(`)/,["","string.link",""]]],citations:[[/^(\.\.\s+\[)((?:@citationName))(\]\s+)(.*)/,[{token:"",next:"@subsequentLines"},"string.link","",""]]],citationsReference:[[/(\[)(@citationName)(\]_)/,["","string.link",""]]],footnotes:[[/^(\.\.\s+\[)((?:[0-9]+))(\]\s+.*)/,[{token:"",next:"@subsequentLines"},"string.link",""]],[/^(\.\.\s+\[)((?:#@simpleRefName?))(\]\s+)(.*)/,[{token:"",next:"@subsequentLines"},"string.link","",""]],[/^(\.\.\s+\[)((?:\*))(\]\s+)(.*)/,[{token:"",next:"@subsequentLines"},"string.link","",""]]],footnotesReference:[[/(\[)([0-9]+)(\])(_)/,["","string.link","",""]],[/(\[)(#@simpleRefName?)(\])(_)/,["","string.link","",""]],[/(\[)(\*)(\])(_)/,["","string.link","",""]]],blankLineOfLiteralBlocks:[[/^$/,"","@subsequentLinesOfLiteralBlocks"],[/^.*$/,"","@pop"]],subsequentLinesOfLiteralBlocks:[[/(@blockLiteralStart+)(.*)/,["keyword",""]],[/^(?!blockLiteralStart)/,"","@popall"]],subsequentLines:[[/^[\s]+.*/,""],[/^(?!\s)/,"","@pop"]],hyperlinks:[[/^[\s]+.*/,"string.link"],[/^(?!\s)/,"","@pop"]],comments:[[/^[\s]+.*/,"comment"],[/^(?!\s)/,"","@pop"]],tables:[[/\+-[+-]+/,"keyword"],[/\+=[+=]+/,"keyword"]]}}}}]);
//# sourceMappingURL=3008.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3315],{3315:(e,o,n)=>{n.r(o),n.d(o,{conf:()=>t,language:()=>s});var t={comments:{lineComment:"//",blockComment:["(*","*)"]},brackets:[["{","}"],["[","]"],["(",")"],["<",">"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">"},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">"},{open:"'",close:"'"}]},s={defaultToken:"",tokenPostfix:".pascaligo",ignoreCase:!0,brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"},{open:"<",close:">",token:"delimiter.angle"}],keywords:["begin","block","case","const","else","end","fail","for","from","function","if","is","nil","of","remove","return","skip","then","type","var","while","with","option","None","transaction"],typeKeywords:["bool","int","list","map","nat","record","string","unit","address","map","mtz","xtz"],operators:["=",">","<","<=",">=","<>",":",":=","and","mod","or","+","-","*","/","@","&","^","%"],symbols:/[=><:@\^&|+\-*\/\^%]+/,tokenizer:{root:[[/[a-zA-Z_][\w]*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"identifier"}}],{include:"@whitespace"},[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/\d*\.\d+([eE][\-+]?\d+)?/,"number.float"],[/\$[0-9a-fA-F]{1,16}/,"number.hex"],[/\d+/,"number"],[/[;,.]/,"delimiter"],[/'([^'\\]|\\.)*$/,"string.invalid"],[/'/,"string","@string"],[/'[^\\']'/,"string"],[/'/,"string.invalid"],[/\#\d+/,"string"]],comment:[[/[^\(\*]+/,"comment"],[/\*\)/,"comment","@pop"],[/\(\*/,"comment"]],string:[[/[^\\']+/,"string"],[/\\./,"string.escape.invalid"],[/'/,{token:"string.quote",bracket:"@close",next:"@pop"}]],whitespace:[[/[ \t\r\n]+/,"white"],[/\(\*/,"comment","@comment"],[/\/\/.*$/,"comment"]]}}}}]);
//# sourceMappingURL=3315.entry.js.map

File diff suppressed because one or more lines are too long


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3399],{3399:(e,t,s)=>{s.r(t),s.d(t,{conf:()=>n,language:()=>o});var n={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\#\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"<",close:">"}],folding:{markers:{start:new RegExp("^\\s*//\\s*(?:(?:#?region\\b)|(?:<editor-fold\\b))"),end:new RegExp("^\\s*//\\s*(?:(?:#?endregion\\b)|(?:</editor-fold>))")}}},o={defaultToken:"",tokenPostfix:".java",keywords:["abstract","continue","for","new","switch","assert","default","goto","package","synchronized","boolean","do","if","private","this","break","double","implements","protected","throw","byte","else","import","public","throws","case","enum","instanceof","return","transient","catch","extends","int","short","try","char","final","interface","static","void","class","finally","long","strictfp","volatile","const","float","native","super","while","true","false","yield","record","sealed","non-sealed","permits"],operators:["=",">","<","!","~","?",":","==","<=",">=","!=","&&","||","++","--","+","-","*","/","&","|","^","%","<<",">>",">>>","+=","-=","*=","/=","&=","|=","^=","%=","<<=",">>=",">>>="],symbols:/[=><!~?:&|+\-*\/\^%]+/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,digits:/\d+(_+\d+)*/,octaldigits:/[0-7]+(_+[0-7]+)*/,binarydigits:/[0-1]+(_+[0-1]+)*/,hexdigits:/[[0-9a-fA-F]+(_+[0-9a-fA-F]+)*/,tokenizer:{root:[["non-sealed","keyword.non-sealed"],[/[a-zA-Z_$][\w$]*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"identifier"}}],{include:"@whitespace"},[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operato
rs":"delimiter","@default":""}}],[/@\s*[a-zA-Z_\$][\w\$]*/,"annotation"],[/(@digits)[eE]([\-+]?(@digits))?[fFdD]?/,"number.float"],[/(@digits)\.(@digits)([eE][\-+]?(@digits))?[fFdD]?/,"number.float"],[/0[xX](@hexdigits)[Ll]?/,"number.hex"],[/0(@octaldigits)[Ll]?/,"number.octal"],[/0[bB](@binarydigits)[Ll]?/,"number.binary"],[/(@digits)[fFdD]/,"number.float"],[/(@digits)[lL]?/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/"""/,"string","@multistring"],[/"/,"string","@string"],[/'[^\\']'/,"string"],[/(')(@escapes)(')/,["string","string.escape","string"]],[/'/,"string.invalid"]],whitespace:[[/[ \t\r\n]+/,""],[/\/\*\*(?!\/)/,"comment.doc","@javadoc"],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\*\//,"comment","@pop"],[/[\/*]/,"comment"]],javadoc:[[/[^\/*]+/,"comment.doc"],[/\/\*/,"comment.doc.invalid"],[/\*\//,"comment.doc","@pop"],[/[\/*]/,"comment.doc"]],string:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,"string","@pop"]],multistring:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"""/,"string","@pop"],[/./,"string"]]}}}}]);
//# sourceMappingURL=3399.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3504],{3504:(E,S,e)=>{e.r(S),e.d(S,{conf:()=>T,language:()=>R});var T={brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}]},R={defaultToken:"",tokenPostfix:".redis",ignoreCase:!0,brackets:[{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"}],keywords:["APPEND","AUTH","BGREWRITEAOF","BGSAVE","BITCOUNT","BITFIELD","BITOP","BITPOS","BLPOP","BRPOP","BRPOPLPUSH","CLIENT","KILL","LIST","GETNAME","PAUSE","REPLY","SETNAME","CLUSTER","ADDSLOTS","COUNT-FAILURE-REPORTS","COUNTKEYSINSLOT","DELSLOTS","FAILOVER","FORGET","GETKEYSINSLOT","INFO","KEYSLOT","MEET","NODES","REPLICATE","RESET","SAVECONFIG","SET-CONFIG-EPOCH","SETSLOT","SLAVES","SLOTS","COMMAND","COUNT","GETKEYS","CONFIG","GET","REWRITE","SET","RESETSTAT","DBSIZE","DEBUG","OBJECT","SEGFAULT","DECR","DECRBY","DEL","DISCARD","DUMP","ECHO","EVAL","EVALSHA","EXEC","EXISTS","EXPIRE","EXPIREAT","FLUSHALL","FLUSHDB","GEOADD","GEOHASH","GEOPOS","GEODIST","GEORADIUS","GEORADIUSBYMEMBER","GETBIT","GETRANGE","GETSET","HDEL","HEXISTS","HGET","HGETALL","HINCRBY","HINCRBYFLOAT","HKEYS","HLEN","HMGET","HMSET","HSET","HSETNX","HSTRLEN","HVALS","INCR","INCRBY","INCRBYFLOAT","KEYS","LASTSAVE","LINDEX","LINSERT","LLEN","LPOP","LPUSH","LPUSHX","LRANGE","LREM","LSET","LTRIM","MGET","MIGRATE","MONITOR","MOVE","MSET","MSETNX","MULTI","PERSIST","PEXPIRE","PEXPIREAT","PFADD","PFCOUNT","PFMERGE","PING","PSETEX","PSUBSCRIBE","PUBSUB","PTTL","PUBLISH","PUNSUBSCRIBE","QUIT","RANDOMKEY","READONLY","READWRITE","RENAME","RENAMENX","RESTORE","ROLE","RPOP","RPOPLPUSH","RPUSH","RPUSHX","SADD","SAVE","SCARD","SCRIPT","FLUSH","LOAD","SDIFF","SDIFFSTORE","SELECT","SETBIT","SETEX","SETNX","SETRANGE","SHUTDOWN","S
INTER","SINTERSTORE","SISMEMBER","SLAVEOF","SLOWLOG","SMEMBERS","SMOVE","SORT","SPOP","SRANDMEMBER","SREM","STRLEN","SUBSCRIBE","SUNION","SUNIONSTORE","SWAPDB","SYNC","TIME","TOUCH","TTL","TYPE","UNSUBSCRIBE","UNLINK","UNWATCH","WAIT","WATCH","ZADD","ZCARD","ZCOUNT","ZINCRBY","ZINTERSTORE","ZLEXCOUNT","ZRANGE","ZRANGEBYLEX","ZREVRANGEBYLEX","ZRANGEBYSCORE","ZRANK","ZREM","ZREMRANGEBYLEX","ZREMRANGEBYRANK","ZREMRANGEBYSCORE","ZREVRANGE","ZREVRANGEBYSCORE","ZREVRANK","ZSCORE","ZUNIONSTORE","SCAN","SSCAN","HSCAN","ZSCAN"],operators:[],builtinFunctions:[],builtinVariables:[],pseudoColumns:[],tokenizer:{root:[{include:"@whitespace"},{include:"@pseudoColumns"},{include:"@numbers"},{include:"@strings"},{include:"@scopes"},[/[;,.]/,"delimiter"],[/[()]/,"@brackets"],[/[\w@#$]+/,{cases:{"@keywords":"keyword","@operators":"operator","@builtinVariables":"predefined","@builtinFunctions":"predefined","@default":"identifier"}}],[/[<>=!%&+\-*/|~^]/,"operator"]],whitespace:[[/\s+/,"white"]],pseudoColumns:[[/[$][A-Za-z_][\w@#$]*/,{cases:{"@pseudoColumns":"predefined","@default":"identifier"}}]],numbers:[[/0[xX][0-9a-fA-F]*/,"number"],[/[$][+-]*\d*(\.\d*)?/,"number"],[/((\d+(\.\d*)?)|(\.\d+))([eE][\-+]?\d+)?/,"number"]],strings:[[/'/,{token:"string",next:"@string"}],[/"/,{token:"string.double",next:"@stringDouble"}]],string:[[/[^']+/,"string"],[/''/,"string"],[/'/,{token:"string",next:"@pop"}]],stringDouble:[[/[^"]+/,"string.double"],[/""/,"string.double"],[/"/,{token:"string.double",next:"@pop"}]],scopes:[]}}}}]);
//# sourceMappingURL=3504.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3553],{3553:(e,s,o)=>{o.r(s),o.d(s,{conf:()=>t,language:()=>n});var t={comments:{lineComment:"REM"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'}],surroundingPairs:[{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'}],folding:{markers:{start:new RegExp("^\\s*(::\\s*|REM\\s+)#region"),end:new RegExp("^\\s*(::\\s*|REM\\s+)#endregion")}}},n={defaultToken:"",ignoreCase:!0,tokenPostfix:".bat",brackets:[{token:"delimiter.bracket",open:"{",close:"}"},{token:"delimiter.parenthesis",open:"(",close:")"},{token:"delimiter.square",open:"[",close:"]"}],keywords:/call|defined|echo|errorlevel|exist|for|goto|if|pause|set|shift|start|title|not|pushd|popd/,symbols:/[=><!~?&|+\-*\/\^;\.,]+/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,tokenizer:{root:[[/^(\s*)(rem(?:\s.*|))$/,["","comment"]],[/(\@?)(@keywords)(?!\w)/,[{token:"keyword"},{token:"keyword.$2"}]],[/[ \t\r\n]+/,""],[/setlocal(?!\w)/,"keyword.tag-setlocal"],[/endlocal(?!\w)/,"keyword.tag-setlocal"],[/[a-zA-Z_]\w*/,""],[/:\w*/,"metatag"],[/%[^%]+%/,"variable"],[/%%[\w]+(?!\w)/,"variable"],[/[{}()\[\]]/,"@brackets"],[/@symbols/,"delimiter"],[/\d*\.\d+([eE][\-+]?\d+)?/,"number.float"],[/0[xX][0-9a-fA-F_]*[0-9a-fA-F]/,"number.hex"],[/\d+/,"number"],[/[;,.]/,"delimiter"],[/"/,"string",'@string."'],[/'/,"string","@string.'"]],string:[[/[^\\"'%]+/,{cases:{"@eos":{token:"string",next:"@popall"},"@default":"string"}}],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/%[\w ]+%/,"variable"],[/%%[\w]+(?!\w)/,"variable"],[/["']/,{cases:{"$#==$S2":{token:"string",next:"@pop"},"@default":"string"}}],[/$/,"string","@popall"]]}}}}]);
//# sourceMappingURL=3553.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3673],{3673:(e,t,r)=>{r.r(t),r.d(t,{conf:()=>s,language:()=>n});var s={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\@\#%\^\&\*\(\)\=\$\-\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{blockComment:["###","###"],lineComment:"#"},folding:{markers:{start:new RegExp("^\\s*#region\\b"),end:new RegExp("^\\s*#endregion\\b")}}},n={defaultToken:"",ignoreCase:!1,tokenPostfix:".mips",regEx:/\/(?!\/\/)(?:[^\/\\]|\\.)*\/[igm]*/,keywords:[".data",".text","syscall","trap","add","addu","addi","addiu","and","andi","div","divu","mult","multu","nor","or","ori","sll","slv","sra","srav","srl","srlv","sub","subu","xor","xori","lhi","lho","lhi","llo","slt","slti","sltu","sltiu","beq","bgtz","blez","bne","j","jal","jalr","jr","lb","lbu","lh","lhu","lw","li","la","sb","sh","sw","mfhi","mflo","mthi","mtlo","move"],symbols:/[\.,\:]+/,escapes:/\\(?:[abfnrtv\\"'$]|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,tokenizer:{root:[[/\$[a-zA-Z_]\w*/,"variable.predefined"],[/[.a-zA-Z_]\w*/,{cases:{this:"variable.predefined","@keywords":{token:"keyword.$0"},"@default":""}}],[/[ 
\t\r\n]+/,""],[/#.*$/,"comment"],["///",{token:"regexp",next:"@hereregexp"}],[/^(\s*)(@regEx)/,["","regexp"]],[/(\,)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\:)(\s*)(@regEx)/,["delimiter","","regexp"]],[/@symbols/,"delimiter"],[/\d+[eE]([\-+]?\d+)?/,"number.float"],[/\d+\.\d+([eE][\-+]?\d+)?/,"number.float"],[/0[xX][0-9a-fA-F]+/,"number.hex"],[/0[0-7]+(?!\d)/,"number.octal"],[/\d+/,"number"],[/[,.]/,"delimiter"],[/"""/,"string",'@herestring."""'],[/'''/,"string","@herestring.'''"],[/"/,{cases:{"@eos":"string","@default":{token:"string",next:'@string."'}}}],[/'/,{cases:{"@eos":"string","@default":{token:"string",next:"@string.'"}}}]],string:[[/[^"'\#\\]+/,"string"],[/@escapes/,"string.escape"],[/\./,"string.escape.invalid"],[/\./,"string.escape.invalid"],[/#{/,{cases:{'$S2=="':{token:"string",next:"root.interpolatedstring"},"@default":"string"}}],[/["']/,{cases:{"$#==$S2":{token:"string",next:"@pop"},"@default":"string"}}],[/#/,"string"]],herestring:[[/("""|''')/,{cases:{"$1==$S2":{token:"string",next:"@pop"},"@default":"string"}}],[/[^#\\'"]+/,"string"],[/['"]+/,"string"],[/@escapes/,"string.escape"],[/\./,"string.escape.invalid"],[/#{/,{token:"string.quote",next:"root.interpolatedstring"}],[/#/,"string"]],comment:[[/[^#]+/,"comment"],[/#/,"comment"]],hereregexp:[[/[^\\\/#]+/,"regexp"],[/\\./,"regexp"],[/#.*$/,"comment"],["///[igm]*",{token:"regexp",next:"@pop"}],[/\//,"regexp"]]}}}}]);
//# sourceMappingURL=3673.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[3855],{3855:(e,t,i)=>{i.r(t),i.d(t,{conf:()=>o,language:()=>n});var o={comments:{lineComment:"COMMENT"},brackets:[["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:":",close:"."}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"`",close:"`"},{open:'"',close:'"'},{open:"'",close:"'"},{open:":",close:"."}],folding:{markers:{start:new RegExp("^\\s*(::\\s*|COMMENT\\s+)#region"),end:new RegExp("^\\s*(::\\s*|COMMENT\\s+)#endregion")}}},n={tokenPostfix:".lexon",ignoreCase:!0,keywords:["lexon","lex","clause","terms","contracts","may","pay","pays","appoints","into","to"],typeKeywords:["amount","person","key","time","date","asset","text"],operators:["less","greater","equal","le","gt","or","and","add","added","subtract","subtracted","multiply","multiplied","times","divide","divided","is","be","certified"],symbols:/[=><!~?:&|+\-*\/\^%]+/,tokenizer:{root:[[/^(\s*)(comment:?(?:\s.*|))$/,["","comment"]],[/"/,{token:"identifier.quote",bracket:"@open",next:"@quoted_identifier"}],["LEX$",{token:"keyword",bracket:"@open",next:"@identifier_until_period"}],["LEXON",{token:"keyword",bracket:"@open",next:"@semver"}],[":",{token:"delimiter",bracket:"@open",next:"@identifier_until_period"}],[/[a-z_$][\w$]*/,{cases:{"@operators":"operator","@typeKeywords":"keyword.type","@keywords":"keyword","@default":"identifier"}}],{include:"@whitespace"},[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,"delimiter"],[/\d*\.\d*\.\d*/,"number.semver"],[/\d*\.\d+([eE][\-+]?\d+)?/,"number.float"],[/0[xX][0-9a-fA-F]+/,"number.hex"],[/\d+/,"number"],[/[;,.]/,"delimiter"]],quoted_identifier:[[/[^\\"]+/,"identifier"],[/"/,{token:"identifier.quote",bracket:"@close",next:"@pop"}]],space_identifier_until_period:[[":","delimiter"],[" 
",{token:"white",next:"@identifier_rest"}]],identifier_until_period:[{include:"@whitespace"},[":",{token:"delimiter",next:"@identifier_rest"}],[/[^\\.]+/,"identifier"],[/\./,{token:"delimiter",bracket:"@close",next:"@pop"}]],identifier_rest:[[/[^\\.]+/,"identifier"],[/\./,{token:"delimiter",bracket:"@close",next:"@pop"}]],semver:[{include:"@whitespace"},[":","delimiter"],[/\d*\.\d*\.\d*/,{token:"number.semver",bracket:"@close",next:"@pop"}]],whitespace:[[/[ \t\r\n]+/,"white"]]}}}}]);
//# sourceMappingURL=3855.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4035],{4035:(e,n,t)=>{t.r(n),t.d(n,{conf:()=>o,language:()=>r});var o={comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"'",close:"'"},{open:"'''",close:"'''"}],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"'",close:"'",notIn:["string","comment"]},{open:"'''",close:"'''",notIn:["string","comment"]}],autoCloseBefore:":.,=}])' \n\t",indentationRules:{increaseIndentPattern:new RegExp("^((?!\\/\\/).)*(\\{[^}\"'`]*|\\([^)\"'`]*|\\[[^\\]\"'`]*)$"),decreaseIndentPattern:new RegExp("^((?!.*?\\/\\*).*\\*/)?\\s*[\\}\\]].*$")}},r={defaultToken:"",tokenPostfix:".bicep",brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"}],symbols:/[=><!~?:&|+\-*/^%]+/,keywords:["targetScope","resource","module","param","var","output","for","in","if","existing"],namedLiterals:["true","false","null"],escapes:"\\\\(u{[0-9A-Fa-f]+}|n|r|t|\\\\|'|\\${)",tokenizer:{root:[{include:"@expression"},{include:"@whitespace"}],stringVerbatim:[{regex:"(|'|'')[^']",action:{token:"string"}},{regex:"'''",action:{token:"string.quote",next:"@pop"}}],stringLiteral:[{regex:"\\${",action:{token:"delimiter.bracket",next:"@bracketCounting"}},{regex:"[^\\\\'$]+",action:{token:"string"}},{regex:"@escapes",action:{token:"string.escape"}},{regex:"\\\\.",action:{token:"string.escape.invalid"}},{regex:"'",action:{token:"string",next:"@pop"}}],bracketCounting:[{regex:"{",action:{token:"delimiter.bracket",next:"@bracketCounting"}},{regex:"}",action:{token:"delimiter.bracket",next:"@pop"}},{include:"expression"}],comment:[{regex:"[^\\*]+",action:{token:"comment"}},{regex:"\\*\\/",action:{token:"comment",next:"@pop"}},{regex:"[\\/*]",action:{token:"comment"}}],whitespace:[{regex:"[ 
\\t\\r\\n]"},{regex:"\\/\\*",action:{token:"comment",next:"@comment"}},{regex:"\\/\\/.*$",action:{token:"comment"}}],expression:[{regex:"'''",action:{token:"string.quote",next:"@stringVerbatim"}},{regex:"'",action:{token:"string.quote",next:"@stringLiteral"}},{regex:"[0-9]+",action:{token:"number"}},{regex:"\\b[_a-zA-Z][_a-zA-Z0-9]*\\b",action:{cases:{"@keywords":{token:"keyword"},"@namedLiterals":{token:"keyword"},"@default":{token:"identifier"}}}}]}}}}]);
//# sourceMappingURL=4035.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4073],{4073:(e,o,r)=>{r.r(o),r.d(o,{conf:()=>t,language:()=>a});var t={comments:{lineComment:"#"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'}]},a={defaultToken:"",tokenPostfix:".r",roxygen:["@alias","@aliases","@assignee","@author","@backref","@callGraph","@callGraphDepth","@callGraphPrimitives","@concept","@describeIn","@description","@details","@docType","@encoding","@evalNamespace","@evalRd","@example","@examples","@export","@exportClass","@exportMethod","@exportPattern","@family","@field","@formals","@format","@import","@importClassesFrom","@importFrom","@importMethodsFrom","@include","@inherit","@inheritDotParams","@inheritParams","@inheritSection","@keywords","@md","@method","@name","@noMd","@noRd","@note","@param","@rawNamespace","@rawRd","@rdname","@references","@return","@S3method","@section","@seealso","@setClass","@slot","@source","@template","@templateVar","@title","@TODO","@usage","@useDynLib"],constants:["NULL","FALSE","TRUE","NA","Inf","NaN","NA_integer_","NA_real_","NA_complex_","NA_character_","T","F","LETTERS","letters","month.abb","month.name","pi","R.version.string"],keywords:["break","next","return","if","else","for","in","repeat","while","array","category","character","complex","double","function","integer","list","logical","matrix","numeric","vector","data.frame","factor","library","require","attach","detach","source"],special:["\\n","\\r","\\t","\\b","\\a","\\f","\\v","\\'",'\\"',"\\\\"],brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.bracket"},{open:"(",close:")",token:"delimiter.parenthesis"}],tokenizer:{root:[{include:"@numbers"},{include:"@strings"},[/[{}\[\]()]/,"@brackets"],{include:"@operators"},[/#'$/,"comment.doc"],[/#'/,"c
omment.doc","@roxygen"],[/(^#.*$)/,"comment"],[/\s+/,"white"],[/[,:;]/,"delimiter"],[/@[a-zA-Z]\w*/,"tag"],[/[a-zA-Z]\w*/,{cases:{"@keywords":"keyword","@constants":"constant","@default":"identifier"}}]],roxygen:[[/@\w+/,{cases:{"@roxygen":"tag","@eos":{token:"comment.doc",next:"@pop"},"@default":"comment.doc"}}],[/\s+/,{cases:{"@eos":{token:"comment.doc",next:"@pop"},"@default":"comment.doc"}}],[/.*/,{token:"comment.doc",next:"@pop"}]],numbers:[[/0[xX][0-9a-fA-F]+/,"number.hex"],[/-?(\d*\.)?\d+([eE][+\-]?\d+)?/,"number"]],operators:[[/<{1,2}-/,"operator"],[/->{1,2}/,"operator"],[/%[^%\s]+%/,"operator"],[/\*\*/,"operator"],[/%%/,"operator"],[/&&/,"operator"],[/\|\|/,"operator"],[/<</,"operator"],[/>>/,"operator"],[/[-+=&|!<>^~*/:$]/,"operator"]],strings:[[/'/,"string.escape","@stringBody"],[/"/,"string.escape","@dblStringBody"]],stringBody:[[/\\./,{cases:{"@special":"string","@default":"error-token"}}],[/'/,"string.escape","@popall"],[/./,"string"]],dblStringBody:[[/\\./,{cases:{"@special":"string","@default":"error-token"}}],[/"/,"string.escape","@popall"],[/./,"string"]]}}}}]);
//# sourceMappingURL=4073.entry.js.map

File diff suppressed because one or more lines are too long


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4369],{4369:(e,o,t)=>{t.r(o),t.d(o,{conf:()=>n,language:()=>s});var n={comments:{lineComment:"#"},brackets:[["[","]"],["<",">"],["(",")"]],autoClosingPairs:[{open:"[",close:"]"},{open:"<",close:">"},{open:"(",close:")"}],surroundingPairs:[{open:"[",close:"]"},{open:"<",close:">"},{open:"(",close:")"}]},s={defaultToken:"",tokenPostfix:".pla",brackets:[{open:"[",close:"]",token:"delimiter.square"},{open:"<",close:">",token:"delimiter.angle"},{open:"(",close:")",token:"delimiter.parenthesis"}],keywords:[".i",".o",".mv",".ilb",".ob",".label",".type",".phase",".pair",".symbolic",".symbolic-output",".kiss",".p",".e",".end"],comment:/#.*$/,identifier:/[a-zA-Z]+[a-zA-Z0-9_\-]*/,plaContent:/[01\-~\|]+/,tokenizer:{root:[{include:"@whitespace"},[/@comment/,"comment"],[/\.([a-zA-Z_\-]+)/,{cases:{"@eos":{token:"keyword.$1"},"@keywords":{cases:{".type":{token:"keyword.$1",next:"@type"},"@default":{token:"keyword.$1",next:"@keywordArg"}}},"@default":{token:"keyword.$1"}}}],[/@identifier/,"identifier"],[/@plaContent/,"string"]],whitespace:[[/[ \t\r\n]+/,""]],type:[{include:"@whitespace"},[/\w+/,{token:"type",next:"@pop"}]],keywordArg:[[/[ \t\r\n]+/,{cases:{"@eos":{token:"",next:"@pop"},"@default":""}}],[/@comment/,"comment","@pop"],[/[<>()\[\]]/,{cases:{"@eos":{token:"@brackets",next:"@pop"},"@default":"@brackets"}}],[/\-?\d+/,{cases:{"@eos":{token:"number",next:"@pop"},"@default":"number"}}],[/@identifier/,{cases:{"@eos":{token:"identifier",next:"@pop"},"@default":"identifier"}}],[/[;=]/,{cases:{"@eos":{token:"delimiter",next:"@pop"},"@default":"delimiter"}}]]}}}}]);
//# sourceMappingURL=4369.entry.js.map


@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4454],{4454:(e,t,n)=>{n.r(t),n.d(t,{conf:()=>o,language:()=>s});var o={comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"],["<",">"]],autoClosingPairs:[{open:'"',close:'"',notIn:["string","comment"]},{open:"{",close:"}",notIn:["string","comment"]},{open:"[",close:"]",notIn:["string","comment"]},{open:"(",close:")",notIn:["string","comment"]}]},s={defaultToken:"",tokenPostfix:".aes",brackets:[{token:"delimiter.curly",open:"{",close:"}"},{token:"delimiter.parenthesis",open:"(",close:")"},{token:"delimiter.square",open:"[",close:"]"},{token:"delimiter.angle",open:"<",close:">"}],keywords:["contract","library","entrypoint","function","stateful","state","hash","signature","tuple","list","address","string","bool","int","record","datatype","type","option","oracle","oracle_query","Call","Bits","Bytes","Oracle","String","Crypto","Address","Auth","Chain","None","Some","bits","bytes","event","let","map","private","public","true","false","var","if","else","throw"],operators:["=",">","<","!","~","?","::",":","==","<=",">=","!=","&&","||","++","--","+","-","*","/","&","|","^","%","<<",">>",">>>","+=","-=","*=","/=","&=","|=","^=","%=","<<=",">>=",">>>="],symbols:/[=><!~?:&|+\-*\/\^%]+/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,integersuffix:/(ll|LL|u|U|l|L)?(ll|LL|u|U|l|L)?/,floatsuffix:/[fFlL]?/,tokenizer:{root:[[/[a-zA-Z_]\w*/,{cases:{"@keywords":{token:"keyword.$0"},"@default":"identifier"}}],{include:"@whitespace"},[/\[\[.*\]\]/,"annotation"],[/^\s*#\w+/,"keyword"],[/int\d*/,"keyword"],[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/\d*\d+[eE]([\-+]?\d+)?(@floatsuffix)/,"number.float"],[/\d*\.\d+([eE][\-+]?\d+)?(@floatsuffix)/,"number.float"],[/0[xX][0-9a-fA-F']*[0-9a-fA-F](@integersuffix)/,"number.hex"],[/0[0-7']*[0-7](@integersuffix)/,"number.octal"],[/
0[bB][0-1']*[0-1](@integersuffix)/,"number.binary"],[/\d[\d']*\d(@integersuffix)/,"number"],[/\d(@integersuffix)/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/"/,"string","@string"],[/'[^\\']'/,"string"],[/(')(@escapes)(')/,["string","string.escape","string"]],[/'/,"string.invalid"]],whitespace:[[/[ \t\r\n]+/,""],[/\/\*\*(?!\/)/,"comment.doc","@doccomment"],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\*\//,"comment","@pop"],[/[\/*]/,"comment"]],doccomment:[[/[^\/*]+/,"comment.doc"],[/\*\//,"comment.doc","@pop"],[/[\/*]/,"comment.doc"]],string:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,"string","@pop"]]}}}}]);
//# sourceMappingURL=4454.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4511],{4511:(e,r,n)=>{n.r(r),n.d(r,{conf:()=>t,language:()=>s});var t={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\@\#%\^\&\*\(\)\=\$\-\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{blockComment:["###","###"],lineComment:"#"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],folding:{markers:{start:new RegExp("^\\s*#region\\b"),end:new RegExp("^\\s*#endregion\\b")}}},s={defaultToken:"",ignoreCase:!0,tokenPostfix:".coffee",brackets:[{open:"{",close:"}",token:"delimiter.curly"},{open:"[",close:"]",token:"delimiter.square"},{open:"(",close:")",token:"delimiter.parenthesis"}],regEx:/\/(?!\/\/)(?:[^\/\\]|\\.)*\/[igm]*/,keywords:["and","or","is","isnt","not","on","yes","@","no","off","true","false","null","this","new","delete","typeof","in","instanceof","return","throw","break","continue","debugger","if","else","switch","for","while","do","try","catch","finally","class","extends","super","undefined","then","unless","until","loop","of","by","when"],symbols:/[=><!~?&%|+\-*\/\^\.,\:]+/,escapes:/\\(?:[abfnrtv\\"'$]|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,tokenizer:{root:[[/\@[a-zA-Z_]\w*/,"variable.predefined"],[/[a-zA-Z_]\w*/,{cases:{this:"variable.predefined","@keywords":{token:"keyword.$0"},"@default":""}}],[/[ 
\t\r\n]+/,""],[/###/,"comment","@comment"],[/#.*$/,"comment"],["///",{token:"regexp",next:"@hereregexp"}],[/^(\s*)(@regEx)/,["","regexp"]],[/(\()(\s*)(@regEx)/,["@brackets","","regexp"]],[/(\,)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\=)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\:)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\[)(\s*)(@regEx)/,["@brackets","","regexp"]],[/(\!)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\&)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\|)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\?)(\s*)(@regEx)/,["delimiter","","regexp"]],[/(\{)(\s*)(@regEx)/,["@brackets","","regexp"]],[/(\;)(\s*)(@regEx)/,["","","regexp"]],[/}/,{cases:{"$S2==interpolatedstring":{token:"string",next:"@pop"},"@default":"@brackets"}}],[/[{}()\[\]]/,"@brackets"],[/@symbols/,"delimiter"],[/\d+[eE]([\-+]?\d+)?/,"number.float"],[/\d+\.\d+([eE][\-+]?\d+)?/,"number.float"],[/0[xX][0-9a-fA-F]+/,"number.hex"],[/0[0-7]+(?!\d)/,"number.octal"],[/\d+/,"number"],[/[,.]/,"delimiter"],[/"""/,"string",'@herestring."""'],[/'''/,"string","@herestring.'''"],[/"/,{cases:{"@eos":"string","@default":{token:"string",next:'@string."'}}}],[/'/,{cases:{"@eos":"string","@default":{token:"string",next:"@string.'"}}}]],string:[[/[^"'\#\\]+/,"string"],[/@escapes/,"string.escape"],[/\./,"string.escape.invalid"],[/\./,"string.escape.invalid"],[/#{/,{cases:{'$S2=="':{token:"string",next:"root.interpolatedstring"},"@default":"string"}}],[/["']/,{cases:{"$#==$S2":{token:"string",next:"@pop"},"@default":"string"}}],[/#/,"string"]],herestring:[[/("""|''')/,{cases:{"$1==$S2":{token:"string",next:"@pop"},"@default":"string"}}],[/[^#\\'"]+/,"string"],[/['"]+/,"string"],[/@escapes/,"string.escape"],[/\./,"string.escape.invalid"],[/#{/,{token:"string.quote",next:"root.interpolatedstring"}],[/#/,"string"]],comment:[[/[^#]+/,"comment"],[/###/,"comment","@pop"],[/#/,"comment"]],hereregexp:[[/[^\\\/#]+/,"regexp"],[/\\./,"regexp"],[/#.*$/,"comment"],["///[igm]*",{token:"regexp",next:"@pop"}],[/\//,"regex
p"]]}}}}]);
//# sourceMappingURL=4511.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4558],{4558:(e,t,n)=>{n.r(t),n.d(t,{conf:()=>o,language:()=>d});var i=n(2526),r=["area","base","br","col","embed","hr","img","input","keygen","link","menuitem","meta","param","source","track","wbr"],o={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\@\$\^\&\*\(\)\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\s]+)/g,comments:{blockComment:["\x3c!--","--\x3e"]},brackets:[["\x3c!--","--\x3e"],["<",">"],["{","}"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:'"',close:'"'},{open:"'",close:"'"},{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:"<",close:">"}],onEnterRules:[{beforeText:new RegExp("<(?!(?:"+r.join("|")+"))([_:\\w][_:\\w-.\\d]*)([^/>]*(?!/)>)[^<]*$","i"),afterText:/^<\/([_:\w][_:\w-.\d]*)\s*>$/i,action:{indentAction:i.Mj.IndentAction.IndentOutdent}},{beforeText:new RegExp("<(?!(?:"+r.join("|")+"))(\\w[\\w\\d]*)([^/>]*(?!/)>)[^<]*$","i"),action:{indentAction:i.Mj.IndentAction.Indent}}],folding:{markers:{start:new RegExp("^\\s*\x3c!--\\s*#region\\b.*--\x3e"),end:new 
RegExp("^\\s*\x3c!--\\s*#endregion\\b.*--\x3e")}}},d={defaultToken:"",tokenPostfix:".html",ignoreCase:!0,tokenizer:{root:[[/<!DOCTYPE/,"metatag","@doctype"],[/<!--/,"comment","@comment"],[/(<)((?:[\w\-]+:)?[\w\-]+)(\s*)(\/>)/,["delimiter","tag","","delimiter"]],[/(<)(script)/,["delimiter",{token:"tag",next:"@script"}]],[/(<)(style)/,["delimiter",{token:"tag",next:"@style"}]],[/(<)((?:[\w\-]+:)?[\w\-]+)/,["delimiter",{token:"tag",next:"@otherTag"}]],[/(<\/)((?:[\w\-]+:)?[\w\-]+)/,["delimiter",{token:"tag",next:"@otherTag"}]],[/</,"delimiter"],[/[^<]+/]],doctype:[[/[^>]+/,"metatag.content"],[/>/,"metatag","@pop"]],comment:[[/-->/,"comment","@pop"],[/[^-]+/,"comment.content"],[/./,"comment.content"]],otherTag:[[/\/?>/,"delimiter","@pop"],[/"([^"]*)"/,"attribute.value"],[/'([^']*)'/,"attribute.value"],[/[\w\-]+/,"attribute.name"],[/=/,"delimiter"],[/[ \t\r\n]+/]],script:[[/type/,"attribute.name","@scriptAfterType"],[/"([^"]*)"/,"attribute.value"],[/'([^']*)'/,"attribute.value"],[/[\w\-]+/,"attribute.name"],[/=/,"delimiter"],[/>/,{token:"delimiter",next:"@scriptEmbedded",nextEmbedded:"text/javascript"}],[/[ \t\r\n]+/],[/(<\/)(script\s*)(>)/,["delimiter","tag",{token:"delimiter",next:"@pop"}]]],scriptAfterType:[[/=/,"delimiter","@scriptAfterTypeEquals"],[/>/,{token:"delimiter",next:"@scriptEmbedded",nextEmbedded:"text/javascript"}],[/[ \t\r\n]+/],[/<\/script\s*>/,{token:"@rematch",next:"@pop"}]],scriptAfterTypeEquals:[[/"([^"]*)"/,{token:"attribute.value",switchTo:"@scriptWithCustomType.$1"}],[/'([^']*)'/,{token:"attribute.value",switchTo:"@scriptWithCustomType.$1"}],[/>/,{token:"delimiter",next:"@scriptEmbedded",nextEmbedded:"text/javascript"}],[/[ \t\r\n]+/],[/<\/script\s*>/,{token:"@rematch",next:"@pop"}]],scriptWithCustomType:[[/>/,{token:"delimiter",next:"@scriptEmbedded.$S2",nextEmbedded:"$S2"}],[/"([^"]*)"/,"attribute.value"],[/'([^']*)'/,"attribute.value"],[/[\w\-]+/,"attribute.name"],[/=/,"delimiter"],[/[ 
\t\r\n]+/],[/<\/script\s*>/,{token:"@rematch",next:"@pop"}]],scriptEmbedded:[[/<\/script/,{token:"@rematch",next:"@pop",nextEmbedded:"@pop"}],[/[^<]+/,""]],style:[[/type/,"attribute.name","@styleAfterType"],[/"([^"]*)"/,"attribute.value"],[/'([^']*)'/,"attribute.value"],[/[\w\-]+/,"attribute.name"],[/=/,"delimiter"],[/>/,{token:"delimiter",next:"@styleEmbedded",nextEmbedded:"text/css"}],[/[ \t\r\n]+/],[/(<\/)(style\s*)(>)/,["delimiter","tag",{token:"delimiter",next:"@pop"}]]],styleAfterType:[[/=/,"delimiter","@styleAfterTypeEquals"],[/>/,{token:"delimiter",next:"@styleEmbedded",nextEmbedded:"text/css"}],[/[ \t\r\n]+/],[/<\/style\s*>/,{token:"@rematch",next:"@pop"}]],styleAfterTypeEquals:[[/"([^"]*)"/,{token:"attribute.value",switchTo:"@styleWithCustomType.$1"}],[/'([^']*)'/,{token:"attribute.value",switchTo:"@styleWithCustomType.$1"}],[/>/,{token:"delimiter",next:"@styleEmbedded",nextEmbedded:"text/css"}],[/[ \t\r\n]+/],[/<\/style\s*>/,{token:"@rematch",next:"@pop"}]],styleWithCustomType:[[/>/,{token:"delimiter",next:"@styleEmbedded.$S2",nextEmbedded:"$S2"}],[/"([^"]*)"/,"attribute.value"],[/'([^']*)'/,"attribute.value"],[/[\w\-]+/,"attribute.name"],[/=/,"delimiter"],[/[ \t\r\n]+/],[/<\/style\s*>/,{token:"@rematch",next:"@pop"}]],styleEmbedded:[[/<\/style/,{token:"@rematch",next:"@pop",nextEmbedded:"@pop"}],[/[^<]+/,""]]}}}}]);
//# sourceMappingURL=4558.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4610],{4610:(e,t,o)=>{o.r(t),o.d(t,{conf:()=>n,language:()=>s});var n={comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"[",close:"]"},{open:"{",close:"}"},{open:"(",close:")"},{open:'"',close:'"',notIn:["string"]}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],folding:{markers:{start:new RegExp("^\\s*#pragma\\s+region\\b"),end:new RegExp("^\\s*#pragma\\s+endregion\\b")}}},s={tokenPostfix:".rust",defaultToken:"invalid",keywords:["as","async","await","box","break","const","continue","crate","dyn","else","enum","extern","false","fn","for","if","impl","in","let","loop","match","mod","move","mut","pub","ref","return","self","static","struct","super","trait","true","try","type","unsafe","use","where","while","catch","default","union","static","abstract","alignof","become","do","final","macro","offsetof","override","priv","proc","pure","sizeof","typeof","unsized","virtual","yield"],typeKeywords:["Self","m32","m64","m128","f80","f16","f128","int","uint","float","char","bool","u8","u16","u32","u64","f32","f64","i8","i16","i32","i64","str","Option","Either","c_float","c_double","c_void","FILE","fpos_t","DIR","dirent","c_char","c_schar","c_uchar","c_short","c_ushort","c_int","c_uint","c_long","c_ulong","size_t","ptrdiff_t","clock_t","time_t","c_longlong","c_ulonglong","intptr_t","uintptr_t","off_t","dev_t","ino_t","pid_t","mode_t","ssize_t"],constants:["true","false","Some","None","Left","Right","Ok","Err"],supportConstants:["EXIT_FAILURE","EXIT_SUCCESS","RAND_MAX","EOF","SEEK_SET","SEEK_CUR","SEEK_END","_IOFBF","_IONBF","_IOLBF","BUFSIZ","FOPEN_MAX","FILENAME_MAX","L_tmpnam","TMP_MAX","O_RDONLY","O_WRONLY","O_RDWR","O_APPEND","O_CREAT","O_EXCL","O_TRUNC","S_IFIFO","S_IFCHR","S_IFBLK","S_IFDIR","S_IFREG","S_IFMT","S_IEXEC","S_IWRITE","S_IREAD","S_IRWXU","S_IXUSR"
,"S_IWUSR","S_IRUSR","F_OK","R_OK","W_OK","X_OK","STDIN_FILENO","STDOUT_FILENO","STDERR_FILENO"],supportMacros:["format!","print!","println!","panic!","format_args!","unreachable!","write!","writeln!"],operators:["!","!=","%","%=","&","&=","&&","*","*=","+","+=","-","-=","->",".","..","...","/","/=",":",";","<<","<<=","<","<=","=","==","=>",">",">=",">>",">>=","@","^","^=","|","|=","||","_","?","#"],escapes:/\\([nrt0\"''\\]|x\h{2}|u\{\h{1,6}\})/,delimiters:/[,]/,symbols:/[\#\!\%\&\*\+\-\.\/\:\;\<\=\>\@\^\|_\?]+/,intSuffixes:/[iu](8|16|32|64|128|size)/,floatSuffixes:/f(32|64)/,tokenizer:{root:[[/r(#*)"/,{token:"string.quote",bracket:"@open",next:"@stringraw.$1"}],[/[a-zA-Z][a-zA-Z0-9_]*!?|_[a-zA-Z0-9_]+/,{cases:{"@typeKeywords":"keyword.type","@keywords":"keyword","@supportConstants":"keyword","@supportMacros":"keyword","@constants":"keyword","@default":"identifier"}}],[/\$/,"identifier"],[/'[a-zA-Z_][a-zA-Z0-9_]*(?=[^\'])/,"identifier"],[/'(\S|@escapes)'/,"string.byteliteral"],[/"/,{token:"string.quote",bracket:"@open",next:"@string"}],{include:"@numbers"},{include:"@whitespace"},[/@delimiters/,{cases:{"@keywords":"keyword","@default":"delimiter"}}],[/[{}()\[\]<>]/,"@brackets"],[/@symbols/,{cases:{"@operators":"operator","@default":""}}]],whitespace:[[/[ 
\t\r\n]+/,"white"],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\/\*/,"comment","@push"],["\\*/","comment","@pop"],[/[\/*]/,"comment"]],string:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,{token:"string.quote",bracket:"@close",next:"@pop"}]],stringraw:[[/[^"#]+/,{token:"string"}],[/"(#*)/,{cases:{"$1==$S2":{token:"string.quote",bracket:"@close",next:"@pop"},"@default":{token:"string"}}}],[/["#]/,{token:"string"}]],numbers:[[/(0o[0-7_]+)(@intSuffixes)?/,{token:"number"}],[/(0b[0-1_]+)(@intSuffixes)?/,{token:"number"}],[/[\d][\d_]*(\.[\d][\d_]*)?[eE][+-][\d_]+(@floatSuffixes)?/,{token:"number"}],[/\b(\d\.?[\d_]*)(@floatSuffixes)?\b/,{token:"number"}],[/(0x[\da-fA-F]+)_?(@intSuffixes)?/,{token:"number"}],[/[\d][\d_]*(@intSuffixes?)?/,{token:"number"}]]}}}}]);
//# sourceMappingURL=4610.entry.js.map

File diff suppressed because one or more lines are too long

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4858],{4858:(e,t,n)=>{n.r(t),n.d(t,{conf:()=>r,language:()=>i});var r={wordPattern:/(#?-?\d*\.\d\w*%?)|((::|[@#.!:])?[\w-?]+%?)|::|[@#.!:]/g,comments:{blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}",notIn:["string","comment"]},{open:"[",close:"]",notIn:["string","comment"]},{open:"(",close:")",notIn:["string","comment"]},{open:'"',close:'"',notIn:["string","comment"]},{open:"'",close:"'",notIn:["string","comment"]}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],folding:{markers:{start:new RegExp("^\\s*\\/\\*\\s*#region\\b\\s*(.*?)\\s*\\*\\/"),end:new RegExp("^\\s*\\/\\*\\s*#endregion\\b.*\\*\\/")}}},i={defaultToken:"",tokenPostfix:".css",ws:"[ \t\n\r\f]*",identifier:"-?-?([a-zA-Z]|(\\\\(([0-9a-fA-F]{1,6}\\s?)|[^[0-9a-fA-F])))([\\w\\-]|(\\\\(([0-9a-fA-F]{1,6}\\s?)|[^[0-9a-fA-F])))*",brackets:[{open:"{",close:"}",token:"delimiter.bracket"},{open:"[",close:"]",token:"delimiter.bracket"},{open:"(",close:")",token:"delimiter.parenthesis"},{open:"<",close:">",token:"delimiter.angle"}],tokenizer:{root:[{include:"@selector"}],selector:[{include:"@comments"},{include:"@import"},{include:"@strings"},["[@](keyframes|-webkit-keyframes|-moz-keyframes|-o-keyframes)",{token:"keyword",next:"@keyframedeclaration"}],["[@](page|content|font-face|-moz-document)",{token:"keyword"}],["[@](charset|namespace)",{token:"keyword",next:"@declarationbody"}],["(url-prefix)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],["(url)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],{include:"@selectorname"},["[\\*]","tag"],["[>\\+,]","delimiter"],["\\[",{token:"delimiter.bracket",next:"@selectorattribute"}],["{",{token:"delimiter.bracket",next:"@selectorbody"}]],selectorbody:[{include:"@comments"},["[*_]?@identifier@ws:(?=(\\s|\\d|[^{
;}]*[;}]))","attribute.name","@rulevalue"],["}",{token:"delimiter.bracket",next:"@pop"}]],selectorname:[["(\\.|#(?=[^{])|%|(@identifier)|:)+","tag"]],selectorattribute:[{include:"@term"},["]",{token:"delimiter.bracket",next:"@pop"}]],term:[{include:"@comments"},["(url-prefix)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],["(url)(\\()",["attribute.value",{token:"delimiter.parenthesis",next:"@urldeclaration"}]],{include:"@functioninvocation"},{include:"@numbers"},{include:"@name"},{include:"@strings"},["([<>=\\+\\-\\*\\/\\^\\|\\~,])","delimiter"],[",","delimiter"]],rulevalue:[{include:"@comments"},{include:"@strings"},{include:"@term"},["!important","keyword"],[";","delimiter","@pop"],["(?=})",{token:"",next:"@pop"}]],warndebug:[["[@](warn|debug)",{token:"keyword",next:"@declarationbody"}]],import:[["[@](import)",{token:"keyword",next:"@declarationbody"}]],urldeclaration:[{include:"@strings"},["[^)\r\n]+","string"],["\\)",{token:"delimiter.parenthesis",next:"@pop"}]],parenthizedterm:[{include:"@term"},["\\)",{token:"delimiter.parenthesis",next:"@pop"}]],declarationbody:[{include:"@term"},[";","delimiter","@pop"],["(?=})",{token:"",next:"@pop"}]],comments:[["\\/\\*","comment","@comment"],["\\/\\/+.*","comment"]],comment:[["\\*\\/","comment","@pop"],[/[^*/]+/,"comment"],[/./,"comment"]],name:[["@identifier","attribute.value"]],numbers:[["-?(\\d*\\.)?\\d+([eE][\\-+]?\\d+)?",{token:"attribute.value.number",next:"@units"}],["#[0-9a-fA-F_]+(?!\\w)","attribute.value.hex"]],units:[["(em|ex|ch|rem|vmin|vmax|vw|vh|vm|cm|mm|in|px|pt|pc|deg|grad|rad|turn|s|ms|Hz|kHz|%)?","attribute.value.unit","@pop"]],keyframedeclaration:[["@identifier","attribute.value"],["{",{token:"delimiter.bracket",switchTo:"@keyframebody"}]],keyframebody:[{include:"@term"},["{",{token:"delimiter.bracket",next:"@selectorbody"}],["}",{token:"delimiter.bracket",next:"@pop"}]],functioninvocation:[["@identifier\\(",{token:"attribute.value",next:"@functionarguments"}]],functio
narguments:[["\\$@identifier@ws:","attribute.name"],["[,]","delimiter"],{include:"@term"},["\\)",{token:"attribute.value",next:"@pop"}]],strings:[['~?"',{token:"string",next:"@stringenddoublequote"}],["~?'",{token:"string",next:"@stringendquote"}]],stringenddoublequote:[["\\\\.","string"],['"',{token:"string",next:"@pop"}],[/[^\\"]+/,"string"],[".","string"]],stringendquote:[["\\\\.","string"],["'",{token:"string",next:"@pop"}],[/[^\\']+/,"string"],[".","string"]]}}}}]);
//# sourceMappingURL=4858.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[4896],{4896:(e,n,s)=>{s.r(n),s.d(n,{conf:()=>o,language:()=>t});var o={comments:{lineComment:"#"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}]},t={defaultToken:"",tokenPostfix:".ini",escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,tokenizer:{root:[[/^\[[^\]]*\]/,"metatag"],[/(^\w+)(\s*)(\=)/,["key","","delimiter"]],{include:"@whitespace"},[/\d+/,"number"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/'([^'\\]|\\.)*$/,"string.invalid"],[/"/,"string",'@string."'],[/'/,"string","@string.'"]],whitespace:[[/[ \t\r\n]+/,""],[/^\s*[#;].*$/,"comment"]],string:[[/[^\\"']+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/["']/,{cases:{"$#==$S2":{token:"string",next:"@pop"},"@default":"string"}}]]}}}}]);
//# sourceMappingURL=4896.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[491],{491:(e,r,o)=>{o.r(r),o.d(r,{conf:()=>i,language:()=>t});var i={comments:{lineComment:"#"},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"`",close:"`"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"`",close:"`"}]},t={defaultToken:"",ignoreCase:!0,tokenPostfix:".shell",brackets:[{token:"delimiter.bracket",open:"{",close:"}"},{token:"delimiter.parenthesis",open:"(",close:")"},{token:"delimiter.square",open:"[",close:"]"}],keywords:["if","then","do","else","elif","while","until","for","in","esac","fi","fin","fil","done","exit","set","unset","export","function"],builtins:["ab","awk","bash","beep","cat","cc","cd","chown","chmod","chroot","clear","cp","curl","cut","diff","echo","find","gawk","gcc","get","git","grep","hg","kill","killall","ln","ls","make","mkdir","openssl","mv","nc","node","npm","ping","ps","restart","rm","rmdir","sed","service","sh","shopt","shred","source","sort","sleep","ssh","start","stop","su","sudo","svn","tee","telnet","top","touch","vi","vim","wall","wc","wget","who","write","yes","zsh"],symbols:/[=><!~?&|+\-*\/\^;\.,]+/,tokenizer:{root:[{include:"@whitespace"},[/[a-zA-Z]\w*/,{cases:{"@keywords":"keyword","@builtins":"type.identifier","@default":""}}],{include:"@strings"},{include:"@parameters"},{include:"@heredoc"},[/[{}\[\]()]/,"@brackets"],[/-+\w+/,"attribute.name"],[/@symbols/,"delimiter"],{include:"@numbers"},[/[,;]/,"delimiter"]],whitespace:[[/\s+/,"white"],[/(^#!.*$)/,"metatag"],[/(^#.*$)/,"comment"]],numbers:[[/\d*\.\d+([eE][\-+]?\d+)?/,"number.float"],[/0[xX][0-9a-fA-F_]*[0-9a-fA-F]/,"number.hex"],[/\d+/,"number"]],strings:[[/'/,"string","@stringBody"],[/"/,"string","@dblStringBody"]],stringBody:[[/'/,"string","@popall"],[/./,"string"]],dblStringBody:[[/"/,"str
ing","@popall"],[/./,"string"]],heredoc:[[/(<<[-<]?)(\s*)(['"`]?)([\w\-]+)(['"`]?)/,["constants","white","string.heredoc.delimiter","string.heredoc","string.heredoc.delimiter"]]],parameters:[[/\$\d+/,"variable.predefined"],[/\$\w+/,"variable"],[/\$[*@#?\-$!0_]/,"variable"],[/\$'/,"variable","@parameterBodyQuote"],[/\$"/,"variable","@parameterBodyDoubleQuote"],[/\$\(/,"variable","@parameterBodyParen"],[/\$\{/,"variable","@parameterBodyCurlyBrace"]],parameterBodyQuote:[[/[^#:%*@\-!_']+/,"variable"],[/[#:%*@\-!_]/,"delimiter"],[/[']/,"variable","@pop"]],parameterBodyDoubleQuote:[[/[^#:%*@\-!_"]+/,"variable"],[/[#:%*@\-!_]/,"delimiter"],[/["]/,"variable","@pop"]],parameterBodyParen:[[/[^#:%*@\-!_)]+/,"variable"],[/[#:%*@\-!_]/,"delimiter"],[/[)]/,"variable","@pop"]],parameterBodyCurlyBrace:[[/[^#:%*@\-!_}]+/,"variable"],[/[#:%*@\-!_]/,"delimiter"],[/[}]/,"variable","@pop"]]}}}}]);
//# sourceMappingURL=491.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[5669],{5669:(e,t,i)=>{i.r(t),i.d(t,{conf:()=>n,language:()=>s});var n={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\#\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"}],surroundingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"'},{open:"'",close:"'"},{open:"<",close:">"}],folding:{markers:{start:new RegExp("^\\s*//\\s*(?:(?:#?region\\b)|(?:<editor-fold\\b))"),end:new RegExp("^\\s*//\\s*(?:(?:#?endregion\\b)|(?:</editor-fold>))")}}},s={defaultToken:"",tokenPostfix:".kt",keywords:["as","as?","break","class","continue","do","else","false","for","fun","if","in","!in","interface","is","!is","null","object","package","return","super","this","throw","true","try","typealias","val","var","when","while","by","catch","constructor","delegate","dynamic","field","file","finally","get","import","init","param","property","receiver","set","setparam","where","actual","abstract","annotation","companion","const","crossinline","data","enum","expect","external","final","infix","inline","inner","internal","lateinit","noinline","open","operator","out","override","private","protected","public","reified","sealed","suspend","tailrec","vararg","field","it"],operators:["+","-","*","/","%","=","+=","-=","*=","/=","%=","++","--","&&","||","!","==","!=","===","!==",">","<","<=",">=","[","]","!!","?.","?:","::","..",":","?","->","@",";","$","_"],symbols:/[=><!~?:&|+\-*\/\^%]+/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,digits:/\d+(_+\d+)*/,octaldigits:/[0-7]+(_+[0-7]+)*/,binarydigits:/[0-1]+(_+[0-1]+)*/,hexdigits:/[[0-9a-fA-F]+(_+[0-9a-fA-F]+)*/,tokenizer:{root:[[/[A-Z][\w\$]*/,"type.identifier"],[/[a-zA-Z_$][\w$]*/,{cases:{"@keywords":{tok
en:"keyword.$0"},"@default":"identifier"}}],{include:"@whitespace"},[/[{}()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/@\s*[a-zA-Z_\$][\w\$]*/,"annotation"],[/(@digits)[eE]([\-+]?(@digits))?[fFdD]?/,"number.float"],[/(@digits)\.(@digits)([eE][\-+]?(@digits))?[fFdD]?/,"number.float"],[/0[xX](@hexdigits)[Ll]?/,"number.hex"],[/0(@octaldigits)[Ll]?/,"number.octal"],[/0[bB](@binarydigits)[Ll]?/,"number.binary"],[/(@digits)[fFdD]/,"number.float"],[/(@digits)[lL]?/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/"""/,"string","@multistring"],[/"/,"string","@string"],[/'[^\\']'/,"string"],[/(')(@escapes)(')/,["string","string.escape","string"]],[/'/,"string.invalid"]],whitespace:[[/[ \t\r\n]+/,""],[/\/\*\*(?!\/)/,"comment.doc","@javadoc"],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\/\*/,"comment","@comment"],[/\*\//,"comment","@pop"],[/[\/*]/,"comment"]],javadoc:[[/[^\/*]+/,"comment.doc"],[/\/\*/,"comment.doc","@push"],[/\/\*/,"comment.doc.invalid"],[/\*\//,"comment.doc","@pop"],[/[\/*]/,"comment.doc"]],string:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,"string","@pop"]],multistring:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"""/,"string","@pop"],[/./,"string"]]}}}}]);
//# sourceMappingURL=5669.entry.js.map

@@ -0,0 +1,2 @@
"use strict";(self.webpackChunkdemo=self.webpackChunkdemo||[]).push([[5900],{5900:(e,t,n)=>{n.r(t),n.d(t,{conf:()=>r,language:()=>i});var o=n(2526),r={wordPattern:/(-?\d*\.\d\w*)|([^\`\~\!\@\#\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s]+)/g,comments:{lineComment:"//",blockComment:["/*","*/"]},brackets:[["{","}"],["[","]"],["(",")"]],onEnterRules:[{beforeText:/^\s*\/\*\*(?!\/)([^\*]|\*(?!\/))*$/,afterText:/^\s*\*\/$/,action:{indentAction:o.Mj.IndentAction.IndentOutdent,appendText:" * "}},{beforeText:/^\s*\/\*\*(?!\/)([^\*]|\*(?!\/))*$/,action:{indentAction:o.Mj.IndentAction.None,appendText:" * "}},{beforeText:/^(\t|(\ \ ))*\ \*(\ ([^\*]|\*(?!\/))*)?$/,action:{indentAction:o.Mj.IndentAction.None,appendText:"* "}},{beforeText:/^(\t|(\ \ ))*\ \*\/\s*$/,action:{indentAction:o.Mj.IndentAction.None,removeText:1}}],autoClosingPairs:[{open:"{",close:"}"},{open:"[",close:"]"},{open:"(",close:")"},{open:'"',close:'"',notIn:["string"]},{open:"'",close:"'",notIn:["string","comment"]},{open:"`",close:"`",notIn:["string","comment"]},{open:"/**",close:" */",notIn:["string"]}],folding:{markers:{start:new RegExp("^\\s*//\\s*#?region\\b"),end:new 
RegExp("^\\s*//\\s*#?endregion\\b")}}},i={defaultToken:"invalid",tokenPostfix:".ts",keywords:["abstract","any","as","asserts","bigint","boolean","break","case","catch","class","continue","const","constructor","debugger","declare","default","delete","do","else","enum","export","extends","false","finally","for","from","function","get","if","implements","import","in","infer","instanceof","interface","is","keyof","let","module","namespace","never","new","null","number","object","package","private","protected","public","override","readonly","require","global","return","set","static","string","super","switch","symbol","this","throw","true","try","type","typeof","undefined","unique","unknown","var","void","while","with","yield","async","await","of"],operators:["<=",">=","==","!=","===","!==","=>","+","-","**","*","/","%","++","--","<<","</",">>",">>>","&","|","^","!","~","&&","||","??","?",":","=","+=","-=","*=","**=","/=","%=","<<=",">>=",">>>=","&=","|=","^=","@"],symbols:/[=><!~?:&|+\-*\/\^%]+/,escapes:/\\(?:[abfnrtv\\"']|x[0-9A-Fa-f]{1,4}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})/,digits:/\d+(_+\d+)*/,octaldigits:/[0-7]+(_+[0-7]+)*/,binarydigits:/[0-1]+(_+[0-1]+)*/,hexdigits:/[[0-9a-fA-F]+(_+[0-9a-fA-F]+)*/,regexpctl:/[(){}\[\]\$\^|\-*+?\.]/,regexpesc:/\\(?:[bBdDfnrstvwWn0\\\/]|@regexpctl|c[A-Z]|x[0-9a-fA-F]{2}|u[0-9a-fA-F]{4})/,tokenizer:{root:[[/[{}]/,"delimiter.bracket"],{include:"common"}],common:[[/[a-z_$][\w$]*/,{cases:{"@keywords":"keyword","@default":"identifier"}}],[/[A-Z][\w\$]*/,"type.identifier"],{include:"@whitespace"},[/\/(?=([^\\\/]|\\.)+\/([dgimsuy]*)(\s*)(\.|;|,|\)|\]|\}|$))/,{token:"regexp",bracket:"@open",next:"@regexp"}],[/[()\[\]]/,"@brackets"],[/[<>](?!@symbols)/,"@brackets"],[/!(?=([^=]|$))/,"delimiter"],[/@symbols/,{cases:{"@operators":"delimiter","@default":""}}],[/(@digits)[eE]([\-+]?(@digits))?/,"number.float"],[/(@digits)\.(@digits)([eE][\-+]?(@digits))?/,"number.float"],[/0[xX](@hexdigits)n?/,"number.hex"],[/0[oO]?(@octaldigits)n?/,"number.octal"],[
/0[bB](@binarydigits)n?/,"number.binary"],[/(@digits)n?/,"number"],[/[;,.]/,"delimiter"],[/"([^"\\]|\\.)*$/,"string.invalid"],[/'([^'\\]|\\.)*$/,"string.invalid"],[/"/,"string","@string_double"],[/'/,"string","@string_single"],[/`/,"string","@string_backtick"]],whitespace:[[/[ \t\r\n]+/,""],[/\/\*\*(?!\/)/,"comment.doc","@jsdoc"],[/\/\*/,"comment","@comment"],[/\/\/.*$/,"comment"]],comment:[[/[^\/*]+/,"comment"],[/\*\//,"comment","@pop"],[/[\/*]/,"comment"]],jsdoc:[[/[^\/*]+/,"comment.doc"],[/\*\//,"comment.doc","@pop"],[/[\/*]/,"comment.doc"]],regexp:[[/(\{)(\d+(?:,\d*)?)(\})/,["regexp.escape.control","regexp.escape.control","regexp.escape.control"]],[/(\[)(\^?)(?=(?:[^\]\\\/]|\\.)+)/,["regexp.escape.control",{token:"regexp.escape.control",next:"@regexrange"}]],[/(\()(\?:|\?=|\?!)/,["regexp.escape.control","regexp.escape.control"]],[/[()]/,"regexp.escape.control"],[/@regexpctl/,"regexp.escape.control"],[/[^\\\/]/,"regexp"],[/@regexpesc/,"regexp.escape"],[/\\\./,"regexp.invalid"],[/(\/)([dgimsuy]*)/,[{token:"regexp",bracket:"@close",next:"@pop"},"keyword.other"]]],regexrange:[[/-/,"regexp.escape.control"],[/\^/,"regexp.invalid"],[/@regexpesc/,"regexp.escape"],[/[^\]]/,"regexp"],[/\]/,{token:"regexp.escape.control",next:"@pop",bracket:"@close"}]],string_double:[[/[^\\"]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/"/,"string","@pop"]],string_single:[[/[^\\']+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/'/,"string","@pop"]],string_backtick:[[/\$\{/,{token:"delimiter.bracket",next:"@bracketCounting"}],[/[^\\`$]+/,"string"],[/@escapes/,"string.escape"],[/\\./,"string.escape.invalid"],[/`/,"string","@pop"]],bracketCounting:[[/\{/,"delimiter.bracket","@bracketCounting"],[/\}/,"delimiter.bracket","@pop"],{include:"common"}]}}}}]);
//# sourceMappingURL=5900.entry.js.map

Some files were not shown because too many files have changed in this diff.