Mirror of https://github.com/atom/atom.git (synced 2026-02-08 05:35:04 -05:00)

Merge branch 'master' into wl-async-save-dialog
CONTRIBUTING.md (182 lines changed)
@@ -122,7 +122,7 @@ Before creating bug reports, please check [this list](#before-submitting-a-bug-r
 * **Check the [debugging guide](https://flight-manual.atom.io/hacking-atom/sections/debugging/).** You might be able to find the cause of the problem and fix things yourself. Most importantly, check if you can reproduce the problem [in the latest version of Atom](https://flight-manual.atom.io/hacking-atom/sections/debugging/#update-to-the-latest-version), if the problem happens when you run Atom in [safe mode](https://flight-manual.atom.io/hacking-atom/sections/debugging/#check-if-the-problem-shows-up-in-safe-mode), and if you can get the desired behavior by changing [Atom's or packages' config settings](https://flight-manual.atom.io/hacking-atom/sections/debugging/#check-atom-and-package-settings).
 * **Check the [FAQs on the forum](https://discuss.atom.io/c/faq)** for a list of common questions and problems.
 * **Determine [which repository the problem should be reported in](#atom-and-packages)**.
-* **Perform a [cursory search](https://github.com/issues?q=+is%3Aissue+user%3Aatom)** to see if the problem has already been reported. If it has **and the issue is still open**, add a comment to the existing issue instead of opening a new one.
+* **Perform a [cursory search](https://github.com/search?q=+is%3Aissue+user%3Aatom)** to see if the problem has already been reported. If it has **and the issue is still open**, add a comment to the existing issue instead of opening a new one.

 #### How Do I Submit A (Good) Bug Report?

@@ -170,7 +170,7 @@ Before creating enhancement suggestions, please check [this list](#before-submit
 * **Check the [debugging guide](https://flight-manual.atom.io/hacking-atom/sections/debugging/)** for tips — you might discover that the enhancement is already available. Most importantly, check if you're using [the latest version of Atom](https://flight-manual.atom.io/hacking-atom/sections/debugging/#update-to-the-latest-version) and if you can get the desired behavior by changing [Atom's or packages' config settings](https://flight-manual.atom.io/hacking-atom/sections/debugging/#check-atom-and-package-settings).
 * **Check if there's already [a package](https://atom.io/packages) which provides that enhancement.**
 * **Determine [which repository the enhancement should be suggested in](#atom-and-packages).**
-* **Perform a [cursory search](https://github.com/issues?q=+is%3Aissue+user%3Aatom)** to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
+* **Perform a [cursory search](https://github.com/search?q=+is%3Aissue+user%3Aatom)** to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.

 #### How Do I Submit A (Good) Enhancement Suggestion?

@@ -337,7 +337,7 @@ disablePackage: (name, options, callback) ->

 This section lists the labels we use to help us track and manage issues and pull requests. Most labels are used across all Atom repositories, but some are specific to `atom/atom`.

-[GitHub search](https://help.github.com/articles/searching-issues/) makes it easy to use labels for finding groups of issues or pull requests you're interested in. For example, you might be interested in [open issues across `atom/atom` and all Atom-owned packages which are labeled as bugs, but still need to be reliably reproduced](https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abug+label%3Aneeds-reproduction) or perhaps [open pull requests in `atom/atom` which haven't been reviewed yet](https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+comments%3A0). To help you find issues and pull requests, each label is listed with search links for finding open items with that label in `atom/atom` only and also across all Atom repositories. We encourage you to read about [other search filters](https://help.github.com/articles/searching-issues/) which will help you write more focused queries.
+[GitHub search](https://help.github.com/articles/searching-issues/) makes it easy to use labels for finding groups of issues or pull requests you're interested in. For example, you might be interested in [open issues across `atom/atom` and all Atom-owned packages which are labeled as bugs, but still need to be reliably reproduced](https://github.com/search?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abug+label%3Aneeds-reproduction) or perhaps [open pull requests in `atom/atom` which haven't been reviewed yet](https://github.com/search?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+comments%3A0). To help you find issues and pull requests, each label is listed with search links for finding open items with that label in `atom/atom` only and also across all Atom repositories. We encourage you to read about [other search filters](https://help.github.com/articles/searching-issues/) which will help you write more focused queries.

 The labels are loosely grouped by their purpose, but it's not required that every issue have a label from every group or that an issue can't have more than one label from the same group.

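The hunks above (and the label-link hunk that follows) all make the same change: `github.com/issues?q=…` and `github.com/pulls?q=…` links become `github.com/search?q=…` links, with the query qualifiers untouched. As a quick illustration of how such a query URL is assembled, here is a small sketch; `labelSearchUrl` is a hypothetical helper written for this note, not part of the commit:

```javascript
// Hypothetical helper: builds a github.com/search link from the same
// qualifiers the link definitions in this diff use. Each qualifier is
// percent-encoded individually, then the pieces are joined with '+',
// matching the encoding seen in the diff's URLs.
function labelSearchUrl (scope, label) {
  const qualifiers = ['is:open', 'is:issue', scope, `label:${label}`]
  return 'https://github.com/search?q=' + qualifiers.map(encodeURIComponent).join('+')
}

// e.g. labelSearchUrl('user:atom', 'bug')
//   → 'https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abug'
```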
@@ -405,94 +405,94 @@ Please open an issue on `atom/atom` if you have suggestions for new labels, and
 | `requires-changes` | [search][search-atom-repo-label-requires-changes] | [search][search-atom-org-label-requires-changes] | Pull requests which need to be updated based on review comments and then reviewed again. |
 | `needs-testing` | [search][search-atom-repo-label-needs-testing] | [search][search-atom-org-label-needs-testing] | Pull requests which need manual testing. |

-[search-atom-repo-label-enhancement]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aenhancement
-[search-atom-org-label-enhancement]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aenhancement
-[search-atom-repo-label-bug]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Abug
-[search-atom-org-label-bug]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abug
-[search-atom-repo-label-question]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aquestion
-[search-atom-org-label-question]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aquestion
-[search-atom-repo-label-feedback]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Afeedback
-[search-atom-org-label-feedback]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Afeedback
-[search-atom-repo-label-help-wanted]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ahelp-wanted
-[search-atom-org-label-help-wanted]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ahelp-wanted
-[search-atom-repo-label-beginner]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Abeginner
-[search-atom-org-label-beginner]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abeginner
-[search-atom-repo-label-more-information-needed]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Amore-information-needed
-[search-atom-org-label-more-information-needed]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Amore-information-needed
-[search-atom-repo-label-needs-reproduction]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aneeds-reproduction
-[search-atom-org-label-needs-reproduction]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aneeds-reproduction
-[search-atom-repo-label-triage-help-needed]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Atriage-help-needed
-[search-atom-org-label-triage-help-needed]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Atriage-help-needed
-[search-atom-repo-label-windows]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Awindows
-[search-atom-org-label-windows]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Awindows
-[search-atom-repo-label-linux]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Alinux
-[search-atom-org-label-linux]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Alinux
-[search-atom-repo-label-mac]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Amac
-[search-atom-org-label-mac]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Amac
-[search-atom-repo-label-documentation]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Adocumentation
-[search-atom-org-label-documentation]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Adocumentation
-[search-atom-repo-label-performance]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aperformance
-[search-atom-org-label-performance]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aperformance
-[search-atom-repo-label-security]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Asecurity
-[search-atom-org-label-security]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Asecurity
-[search-atom-repo-label-ui]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aui
-[search-atom-org-label-ui]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aui
-[search-atom-repo-label-api]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aapi
-[search-atom-org-label-api]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aapi
-[search-atom-repo-label-crash]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Acrash
-[search-atom-org-label-crash]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Acrash
-[search-atom-repo-label-auto-indent]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aauto-indent
-[search-atom-org-label-auto-indent]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aauto-indent
-[search-atom-repo-label-encoding]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aencoding
-[search-atom-org-label-encoding]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aencoding
-[search-atom-repo-label-network]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Anetwork
-[search-atom-org-label-network]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Anetwork
-[search-atom-repo-label-uncaught-exception]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Auncaught-exception
-[search-atom-org-label-uncaught-exception]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Auncaught-exception
-[search-atom-repo-label-git]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Agit
-[search-atom-org-label-git]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Agit
-[search-atom-repo-label-blocked]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ablocked
-[search-atom-org-label-blocked]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ablocked
-[search-atom-repo-label-duplicate]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aduplicate
-[search-atom-org-label-duplicate]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aduplicate
-[search-atom-repo-label-wontfix]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Awontfix
-[search-atom-org-label-wontfix]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Awontfix
-[search-atom-repo-label-invalid]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ainvalid
-[search-atom-org-label-invalid]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ainvalid
-[search-atom-repo-label-package-idea]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Apackage-idea
-[search-atom-org-label-package-idea]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Apackage-idea
-[search-atom-repo-label-wrong-repo]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Awrong-repo
-[search-atom-org-label-wrong-repo]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Awrong-repo
-[search-atom-repo-label-editor-rendering]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aeditor-rendering
-[search-atom-org-label-editor-rendering]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aeditor-rendering
-[search-atom-repo-label-build-error]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Abuild-error
-[search-atom-org-label-build-error]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abuild-error
-[search-atom-repo-label-error-from-pathwatcher]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aerror-from-pathwatcher
-[search-atom-org-label-error-from-pathwatcher]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aerror-from-pathwatcher
-[search-atom-repo-label-error-from-save]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aerror-from-save
-[search-atom-org-label-error-from-save]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aerror-from-save
-[search-atom-repo-label-error-from-open]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aerror-from-open
-[search-atom-org-label-error-from-open]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aerror-from-open
-[search-atom-repo-label-installer]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ainstaller
-[search-atom-org-label-installer]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ainstaller
-[search-atom-repo-label-auto-updater]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aauto-updater
-[search-atom-org-label-auto-updater]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aauto-updater
-[search-atom-repo-label-deprecation-help]: https://github.com/issues?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Adeprecation-help
-[search-atom-org-label-deprecation-help]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Adeprecation-help
-[search-atom-repo-label-electron]: https://github.com/issues?q=is%3Aissue+repo%3Aatom%2Fatom+is%3Aopen+label%3Aelectron
-[search-atom-org-label-electron]: https://github.com/issues?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aelectron
-[search-atom-repo-label-work-in-progress]: https://github.com/pulls?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Awork-in-progress
-[search-atom-org-label-work-in-progress]: https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Awork-in-progress
-[search-atom-repo-label-needs-review]: https://github.com/pulls?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Aneeds-review
-[search-atom-org-label-needs-review]: https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Aneeds-review
-[search-atom-repo-label-under-review]: https://github.com/pulls?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Aunder-review
-[search-atom-org-label-under-review]: https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Aunder-review
-[search-atom-repo-label-requires-changes]: https://github.com/pulls?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Arequires-changes
-[search-atom-org-label-requires-changes]: https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Arequires-changes
-[search-atom-repo-label-needs-testing]: https://github.com/pulls?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Aneeds-testing
-[search-atom-org-label-needs-testing]: https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Aneeds-testing
+[search-atom-repo-label-enhancement]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aenhancement
+[search-atom-org-label-enhancement]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aenhancement
+[search-atom-repo-label-bug]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Abug
+[search-atom-org-label-bug]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abug
+[search-atom-repo-label-question]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aquestion
+[search-atom-org-label-question]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aquestion
+[search-atom-repo-label-feedback]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Afeedback
+[search-atom-org-label-feedback]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Afeedback
+[search-atom-repo-label-help-wanted]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ahelp-wanted
+[search-atom-org-label-help-wanted]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ahelp-wanted
+[search-atom-repo-label-beginner]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Abeginner
+[search-atom-org-label-beginner]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abeginner
+[search-atom-repo-label-more-information-needed]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Amore-information-needed
+[search-atom-org-label-more-information-needed]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Amore-information-needed
+[search-atom-repo-label-needs-reproduction]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aneeds-reproduction
+[search-atom-org-label-needs-reproduction]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aneeds-reproduction
+[search-atom-repo-label-triage-help-needed]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Atriage-help-needed
+[search-atom-org-label-triage-help-needed]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Atriage-help-needed
+[search-atom-repo-label-windows]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Awindows
+[search-atom-org-label-windows]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Awindows
+[search-atom-repo-label-linux]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Alinux
+[search-atom-org-label-linux]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Alinux
+[search-atom-repo-label-mac]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Amac
+[search-atom-org-label-mac]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Amac
+[search-atom-repo-label-documentation]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Adocumentation
+[search-atom-org-label-documentation]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Adocumentation
+[search-atom-repo-label-performance]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aperformance
+[search-atom-org-label-performance]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aperformance
+[search-atom-repo-label-security]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Asecurity
+[search-atom-org-label-security]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Asecurity
+[search-atom-repo-label-ui]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aui
+[search-atom-org-label-ui]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aui
+[search-atom-repo-label-api]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aapi
+[search-atom-org-label-api]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aapi
+[search-atom-repo-label-crash]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Acrash
+[search-atom-org-label-crash]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Acrash
+[search-atom-repo-label-auto-indent]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aauto-indent
+[search-atom-org-label-auto-indent]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aauto-indent
+[search-atom-repo-label-encoding]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aencoding
+[search-atom-org-label-encoding]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aencoding
+[search-atom-repo-label-network]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Anetwork
+[search-atom-org-label-network]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Anetwork
+[search-atom-repo-label-uncaught-exception]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Auncaught-exception
+[search-atom-org-label-uncaught-exception]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Auncaught-exception
+[search-atom-repo-label-git]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Agit
+[search-atom-org-label-git]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Agit
+[search-atom-repo-label-blocked]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ablocked
+[search-atom-org-label-blocked]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ablocked
+[search-atom-repo-label-duplicate]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aduplicate
+[search-atom-org-label-duplicate]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aduplicate
+[search-atom-repo-label-wontfix]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Awontfix
+[search-atom-org-label-wontfix]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Awontfix
+[search-atom-repo-label-invalid]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ainvalid
+[search-atom-org-label-invalid]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ainvalid
+[search-atom-repo-label-package-idea]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Apackage-idea
+[search-atom-org-label-package-idea]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Apackage-idea
+[search-atom-repo-label-wrong-repo]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Awrong-repo
+[search-atom-org-label-wrong-repo]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Awrong-repo
+[search-atom-repo-label-editor-rendering]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aeditor-rendering
+[search-atom-org-label-editor-rendering]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aeditor-rendering
+[search-atom-repo-label-build-error]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Abuild-error
+[search-atom-org-label-build-error]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Abuild-error
+[search-atom-repo-label-error-from-pathwatcher]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aerror-from-pathwatcher
+[search-atom-org-label-error-from-pathwatcher]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aerror-from-pathwatcher
+[search-atom-repo-label-error-from-save]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aerror-from-save
+[search-atom-org-label-error-from-save]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aerror-from-save
+[search-atom-repo-label-error-from-open]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aerror-from-open
+[search-atom-org-label-error-from-open]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aerror-from-open
+[search-atom-repo-label-installer]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Ainstaller
+[search-atom-org-label-installer]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Ainstaller
+[search-atom-repo-label-auto-updater]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Aauto-updater
+[search-atom-org-label-auto-updater]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aauto-updater
+[search-atom-repo-label-deprecation-help]: https://github.com/search?q=is%3Aopen+is%3Aissue+repo%3Aatom%2Fatom+label%3Adeprecation-help
+[search-atom-org-label-deprecation-help]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Adeprecation-help
+[search-atom-repo-label-electron]: https://github.com/search?q=is%3Aissue+repo%3Aatom%2Fatom+is%3Aopen+label%3Aelectron
+[search-atom-org-label-electron]: https://github.com/search?q=is%3Aopen+is%3Aissue+user%3Aatom+label%3Aelectron
+[search-atom-repo-label-work-in-progress]: https://github.com/search?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Awork-in-progress
+[search-atom-org-label-work-in-progress]: https://github.com/search?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Awork-in-progress
+[search-atom-repo-label-needs-review]: https://github.com/search?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Aneeds-review
+[search-atom-org-label-needs-review]: https://github.com/search?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Aneeds-review
+[search-atom-repo-label-under-review]: https://github.com/search?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Aunder-review
+[search-atom-org-label-under-review]: https://github.com/search?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Aunder-review
+[search-atom-repo-label-requires-changes]: https://github.com/search?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Arequires-changes
+[search-atom-org-label-requires-changes]: https://github.com/search?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Arequires-changes
+[search-atom-repo-label-needs-testing]: https://github.com/search?q=is%3Aopen+is%3Apr+repo%3Aatom%2Fatom+label%3Aneeds-testing
+[search-atom-org-label-needs-testing]: https://github.com/search?q=is%3Aopen+is%3Apr+user%3Aatom+label%3Aneeds-testing

-[beginner]:https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+label%3Abeginner+label%3Ahelp-wanted+user%3Aatom+sort%3Acomments-desc
-[help-wanted]:https://github.com/issues?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted+user%3Aatom+sort%3Acomments-desc+-label%3Abeginner
+[beginner]:https://github.com/search?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+label%3Abeginner+label%3Ahelp-wanted+user%3Aatom+sort%3Acomments-desc
+[help-wanted]:https://github.com/search?q=is%3Aopen+is%3Aissue+label%3Ahelp-wanted+user%3Aatom+sort%3Acomments-desc+-label%3Abeginner
 [contributing-to-official-atom-packages]:https://flight-manual.atom.io/hacking-atom/sections/contributing-to-official-atom-packages/
 [hacking-on-atom-core]: https://flight-manual.atom.io/hacking-atom/sections/hacking-on-atom-core/

@@ -6,6 +6,6 @@
     "url": "https://github.com/atom/atom.git"
   },
   "dependencies": {
-    "atom-package-manager": "1.18.10"
+    "atom-package-manager": "1.18.11"
   }
 }
@@ -1,11 +1,9 @@
-/** @babel */
-
-import Chart from 'chart.js'
-import glob from 'glob'
-import fs from 'fs-plus'
-import path from 'path'
+const Chart = require('chart.js')
+const glob = require('glob')
+const fs = require('fs-plus')
+const path = require('path')

-export default async function ({test, benchmarkPaths}) {
+module.exports = async ({test, benchmarkPaths}) => {
   document.body.style.backgroundColor = '#ffffff'
   document.body.style.overflow = 'auto'

@@ -1,6 +1,4 @@
-/** @babel */
-
-import {TextEditor, TextBuffer} from 'atom'
+const {TextEditor, TextBuffer} = require('atom')

 const MIN_SIZE_IN_KB = 0 * 1024
 const MAX_SIZE_IN_KB = 10 * 1024
@@ -8,7 +6,7 @@ const SIZE_STEP_IN_KB = 1024
 const LINE_TEXT = 'Lorem ipsum dolor sit amet\n'
 const TEXT = LINE_TEXT.repeat(Math.ceil(MAX_SIZE_IN_KB * 1024 / LINE_TEXT.length))

-export default async function ({test}) {
+module.exports = async ({test}) => {
   const data = []

   document.body.appendChild(atom.workspace.getElement())
@@ -27,6 +25,7 @@ export default async function ({test}) {
   let t0 = window.performance.now()
   const buffer = new TextBuffer({text})
   const editor = new TextEditor({buffer, autoHeight: false, largeFileMode: true})
+  atom.grammars.autoAssignLanguageMode(buffer)
   atom.workspace.getActivePane().activateItem(editor)
   let t1 = window.performance.now()

@@ -1,8 +1,6 @@
-/** @babel */
-
-import path from 'path'
-import fs from 'fs'
-import {TextEditor, TextBuffer} from 'atom'
+const path = require('path')
+const fs = require('fs')
+const {TextEditor, TextBuffer} = require('atom')

 const SIZES_IN_KB = [
   512,
@@ -12,7 +10,7 @@ const SIZES_IN_KB = [
 const REPEATED_TEXT = fs.readFileSync(path.join(__dirname, '..', 'spec', 'fixtures', 'sample.js'), 'utf8').replace(/\n/g, '')
 const TEXT = REPEATED_TEXT.repeat(Math.ceil(SIZES_IN_KB[SIZES_IN_KB.length - 1] * 1024 / REPEATED_TEXT.length))

-export default async function ({test}) {
+module.exports = async ({test}) => {
   const data = []

   const workspaceElement = atom.workspace.getElement()
@@ -34,7 +32,7 @@ export default async function ({test}) {
   let t0 = window.performance.now()
   const buffer = new TextBuffer({text})
   const editor = new TextEditor({buffer, autoHeight: false, largeFileMode: true})
-  editor.setGrammar(atom.grammars.grammarForScopeName('source.js'))
+  atom.grammars.assignLanguageMode(buffer, 'source.js')
   atom.workspace.getActivePane().activateItem(editor)
   let t1 = window.performance.now()

package.json (30 lines changed)
@@ -70,7 +70,7 @@
 "service-hub": "^0.7.4",
 "sinon": "1.17.4",
 "temp": "^0.8.3",
-"text-buffer": "13.8.6",
+"text-buffer": "13.9.2",
 "typescript-simple": "1.0.0",
 "underscore-plus": "^1.6.6",
 "winreg": "^1.2.1",
@@ -94,22 +94,22 @@
 "autocomplete-atom-api": "0.10.5",
 "autocomplete-css": "0.17.4",
 "autocomplete-html": "0.8.3",
-"autocomplete-plus": "2.37.5",
+"autocomplete-plus": "2.39.0",
 "autocomplete-snippets": "1.11.2",
 "autoflow": "0.29.0",
 "autosave": "0.24.6",
 "background-tips": "0.27.1",
 "bookmarks": "0.45.0",
 "bracket-matcher": "0.88.0",
-"command-palette": "0.42.1",
+"command-palette": "0.43.0",
 "dalek": "0.2.1",
 "deprecation-cop": "0.56.9",
 "dev-live-reload": "0.48.1",
 "encoding-selector": "0.23.7",
-"exception-reporting": "0.41.5",
+"exception-reporting": "0.42.0",
 "find-and-replace": "0.215.0",
 "fuzzy-finder": "1.7.3",
-"github": "0.8.2",
+"github": "0.8.3",
 "git-diff": "1.3.6",
 "go-to-line": "0.32.1",
 "grammar-selector": "0.49.8",
@@ -120,8 +120,8 @@
 "link": "0.31.4",
 "markdown-preview": "0.159.18",
 "metrics": "1.2.6",
-"notifications": "0.69.2",
-"open-on-github": "1.3.0",
+"notifications": "0.70.2",
+"open-on-github": "1.3.1",
 "package-generator": "1.3.0",
 "settings-view": "0.253.0",
 "snippets": "1.1.9",
@@ -133,23 +133,23 @@
 "timecop": "0.36.2",
 "tree-view": "0.221.3",
 "update-package-dependencies": "0.13.0",
-"welcome": "0.36.5",
+"welcome": "0.36.6",
 "whitespace": "0.37.5",
-"wrap-guide": "0.40.2",
+"wrap-guide": "0.40.3",
 "language-c": "0.58.1",
-"language-clojure": "0.22.4",
+"language-clojure": "0.22.5",
 "language-coffee-script": "0.49.3",
 "language-csharp": "0.14.3",
-"language-css": "0.42.7",
+"language-css": "0.42.8",
 "language-gfm": "0.90.2",
 "language-git": "0.19.1",
 "language-go": "0.44.3",
-"language-html": "0.48.2",
+"language-html": "0.48.3",
 "language-hyperlink": "0.16.3",
 "language-java": "0.27.6",
-"language-javascript": "0.127.6",
+"language-javascript": "0.127.7",
 "language-json": "0.19.1",
-"language-less": "0.33.0",
+"language-less": "0.34.1",
 "language-make": "0.22.3",
 "language-mustache": "0.14.4",
 "language-objective-c": "0.15.1",
@@ -159,7 +159,7 @@
 "language-python": "0.45.5",
 "language-ruby": "0.71.4",
 "language-ruby-on-rails": "0.25.2",
-"language-sass": "0.61.1",
+"language-sass": "0.61.3",
 "language-shellscript": "0.25.4",
 "language-source": "0.9.0",
 "language-sql": "0.25.8",
@@ -301,8 +301,9 @@ describe('AtomEnvironment', () => {
   })

   it('serializes the text editor registry', async () => {
+    await atom.packages.activatePackage('language-text')
     const editor = await atom.workspace.open('sample.js')
-    atom.textEditors.setGrammarOverride(editor, 'text.plain')
+    expect(atom.grammars.assignLanguageMode(editor, 'text.plain')).toBe(true)

     const atom2 = new AtomEnvironment({
       applicationDelegate: atom.applicationDelegate,
@@ -318,7 +319,9 @@ describe('AtomEnvironment', () => {
     atom2.initialize({document, window})

     await atom2.deserialize(atom.serialize())
-    expect(atom2.textEditors.getGrammarOverride(editor)).toBe('text.plain')
+    await atom2.packages.activatePackage('language-text')
+    const editor2 = atom2.workspace.getActiveTextEditor()
+    expect(editor2.getBuffer().getLanguageMode().getLanguageId()).toBe('text.plain')
     atom2.destroy()
   })
@@ -9,34 +9,41 @@ ipcHelpers = require '../src/ipc-helpers'
 formatStackTrace = (spec, message='', stackTrace) ->
   return stackTrace unless stackTrace

+  # at ... (.../jasmine.js:1:2)
   jasminePattern = /^\s*at\s+.*\(?.*[/\\]jasmine(-[^/\\]*)?\.js:\d+:\d+\)?\s*$/
-  firstJasmineLinePattern = /^\s*at [/\\].*[/\\]jasmine(-[^/\\]*)?\.js:\d+:\d+\)?\s*$/
+  # at jasmine.Something... (.../jasmine.js:1:2)
+  firstJasmineLinePattern = /^\s*at\s+jasmine\.[A-Z][^\s]*\s+\(?.*[/\\]jasmine(-[^/\\]*)?\.js:\d+:\d+\)?\s*$/
   lines = []
   for line in stackTrace.split('\n')
-    lines.push(line) unless jasminePattern.test(line)
     break if firstJasmineLinePattern.test(line)
+    lines.push(line) unless jasminePattern.test(line)

   # Remove first line of stack when it is the same as the error message
   errorMatch = lines[0]?.match(/^Error: (.*)/)
   lines.shift() if message.trim() is errorMatch?[1]?.trim()

-  for line, index in lines
-    # Remove prefix of lines matching: at jasmine.Spec.<anonymous> (path:1:2)
-    prefixMatch = line.match(/at jasmine\.Spec\.<anonymous> \(([^)]+)\)/)
-    line = "at #{prefixMatch[1]}" if prefixMatch
+  lines = lines.map (line) ->
+    # Only format actual stacktrace lines
+    if /^\s*at\s/.test(line)
+      # Needs to occur before path relativization
+      if process.platform is 'win32' and /file:\/\/\//.test(line)
+        # file:///C:/some/file -> C:\some\file
+        line = line.replace('file:///', '').replace(///#{path.posix.sep}///g, path.win32.sep)

-    # Relativize locations to spec directory
-    if process.platform is 'win32'
-      line = line.replace('file:///', '').replace(///#{path.posix.sep}///g, path.win32.sep)
-    line = line.replace("at #{spec.specDirectory}#{path.sep}", 'at ')
-    lines[index] = line.replace("(#{spec.specDirectory}#{path.sep}", '(') # at step (path:1:2)
+      line = line.trim()
+        # at jasmine.Spec.<anonymous> (path:1:2) -> at path:1:2
+        .replace(/^at jasmine\.Spec\.<anonymous> \(([^)]+)\)/, 'at $1')
+        # at it (path:1:2) -> at path:1:2
+        .replace(/^at f*it \(([^)]+)\)/, 'at $1')
+        # at spec/file-test.js -> at file-test.js
+        .replace(spec.specDirectory + path.sep, '')

+    return line

-  lines = lines.map (line) -> line.trim()
   lines.join('\n').trim()

 module.exports =
 class AtomReporter

   constructor: ->
     @element = document.createElement('div')
     @element.classList.add('spec-reporter-container')
@@ -366,6 +366,7 @@ describe('GitRepository', () => {
       notificationManager: atom.notifications,
       packageManager: atom.packages,
       confirm: atom.confirm,
+      grammarRegistry: atom.grammars,
       applicationDelegate: atom.applicationDelegate
     })
     await project2.deserialize(atom.project.serialize({isUnloading: false}))
spec/grammar-registry-spec.js (new file, 393 lines)
@@ -0,0 +1,393 @@
const {it, fit, ffit, fffit, beforeEach, afterEach} = require('./async-spec-helpers')

const path = require('path')
const fs = require('fs-plus')
const temp = require('temp').track()
const TextBuffer = require('text-buffer')
const GrammarRegistry = require('../src/grammar-registry')

describe('GrammarRegistry', () => {
  let grammarRegistry

  beforeEach(() => {
    grammarRegistry = new GrammarRegistry({config: atom.config})
  })

  describe('.assignLanguageMode(buffer, languageName)', () => {
    it('assigns to the buffer a language mode with the given language name', async () => {
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      grammarRegistry.loadGrammarSync(require.resolve('language-css/grammars/css.cson'))

      const buffer = new TextBuffer()
      expect(grammarRegistry.assignLanguageMode(buffer, 'source.js')).toBe(true)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.js')

      // Returns true if we found the grammar, even if it didn't change
      expect(grammarRegistry.assignLanguageMode(buffer, 'source.js')).toBe(true)

      // Language names are not case-sensitive
      expect(grammarRegistry.assignLanguageMode(buffer, 'source.css')).toBe(true)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.css')

      // Returns false if no language is found
      expect(grammarRegistry.assignLanguageMode(buffer, 'blub')).toBe(false)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.css')
    })

    describe('when no languageName is passed', () => {
      it('makes the buffer use the null grammar', () => {
        grammarRegistry.loadGrammarSync(require.resolve('language-css/grammars/css.cson'))

        const buffer = new TextBuffer()
        expect(grammarRegistry.assignLanguageMode(buffer, 'source.css')).toBe(true)
        expect(buffer.getLanguageMode().getLanguageId()).toBe('source.css')

        expect(grammarRegistry.assignLanguageMode(buffer, null)).toBe(true)
        expect(buffer.getLanguageMode().getLanguageId()).toBe('text.plain.null-grammar')
      })
    })
  })

  describe('.autoAssignLanguageMode(buffer)', () => {
    it('assigns to the buffer a language mode based on the best available grammar', () => {
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      grammarRegistry.loadGrammarSync(require.resolve('language-css/grammars/css.cson'))

      const buffer = new TextBuffer()
      buffer.setPath('foo.js')
      expect(grammarRegistry.assignLanguageMode(buffer, 'source.css')).toBe(true)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.css')

      grammarRegistry.autoAssignLanguageMode(buffer)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.js')
    })
  })

  describe('.maintainLanguageMode(buffer)', () => {
    it('assigns a grammar to the buffer based on its path', async () => {
      const buffer = new TextBuffer()

      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      grammarRegistry.loadGrammarSync(require.resolve('language-c/grammars/c.cson'))

      buffer.setPath('test.js')
      grammarRegistry.maintainLanguageMode(buffer)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.js')

      buffer.setPath('test.c')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.c')
    })

    it('updates the buffer\'s grammar when a more appropriate grammar is added for its path', async () => {
      const buffer = new TextBuffer()
      expect(buffer.getLanguageMode().getLanguageId()).toBe(null)

      buffer.setPath('test.js')
      grammarRegistry.maintainLanguageMode(buffer)

      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.js')
    })

    it('can be overridden by calling .assignLanguageMode', () => {
      const buffer = new TextBuffer()

      buffer.setPath('test.js')
      grammarRegistry.maintainLanguageMode(buffer)

      grammarRegistry.loadGrammarSync(require.resolve('language-css/grammars/css.cson'))
      expect(grammarRegistry.assignLanguageMode(buffer, 'source.css')).toBe(true)
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.css')

      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.css')
    })

    it('returns a disposable that can be used to stop the registry from updating the buffer', async () => {
      const buffer = new TextBuffer()
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))

      const previousSubscriptionCount = buffer.emitter.getTotalListenerCount()
      const disposable = grammarRegistry.maintainLanguageMode(buffer)
      expect(buffer.emitter.getTotalListenerCount()).toBeGreaterThan(previousSubscriptionCount)
      expect(retainedBufferCount(grammarRegistry)).toBe(1)

      buffer.setPath('test.js')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.js')

      buffer.setPath('test.txt')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('text.plain.null-grammar')

      disposable.dispose()
      expect(buffer.emitter.getTotalListenerCount()).toBe(previousSubscriptionCount)
      expect(retainedBufferCount(grammarRegistry)).toBe(0)

      buffer.setPath('test.js')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('text.plain.null-grammar')
      expect(retainedBufferCount(grammarRegistry)).toBe(0)
    })

    it('doesn\'t do anything when called a second time with the same buffer', async () => {
      const buffer = new TextBuffer()
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      const disposable1 = grammarRegistry.maintainLanguageMode(buffer)
      const disposable2 = grammarRegistry.maintainLanguageMode(buffer)

      buffer.setPath('test.js')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('source.js')

      disposable2.dispose()
      buffer.setPath('test.txt')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('text.plain.null-grammar')

      disposable1.dispose()
      buffer.setPath('test.js')
      expect(buffer.getLanguageMode().getLanguageId()).toBe('text.plain.null-grammar')
    })

    it('does not retain the buffer after the buffer is destroyed', () => {
      const buffer = new TextBuffer()
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))

      const disposable = grammarRegistry.maintainLanguageMode(buffer)
      expect(retainedBufferCount(grammarRegistry)).toBe(1)
      expect(subscriptionCount(grammarRegistry)).toBe(2)

      buffer.destroy()
      expect(retainedBufferCount(grammarRegistry)).toBe(0)
      expect(subscriptionCount(grammarRegistry)).toBe(0)
      expect(buffer.emitter.getTotalListenerCount()).toBe(0)

      disposable.dispose()
      expect(retainedBufferCount(grammarRegistry)).toBe(0)
      expect(subscriptionCount(grammarRegistry)).toBe(0)
    })

    it('does not retain the buffer when the grammar registry is destroyed', () => {
      const buffer = new TextBuffer()
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))

      const disposable = grammarRegistry.maintainLanguageMode(buffer)
      expect(retainedBufferCount(grammarRegistry)).toBe(1)
      expect(subscriptionCount(grammarRegistry)).toBe(2)

      grammarRegistry.clear()

      expect(retainedBufferCount(grammarRegistry)).toBe(0)
      expect(subscriptionCount(grammarRegistry)).toBe(0)
      expect(buffer.emitter.getTotalListenerCount()).toBe(0)
    })
  })
  describe('.selectGrammar(filePath)', () => {
    it('always returns a grammar', () => {
      const registry = new GrammarRegistry({config: atom.config})
      expect(registry.selectGrammar().scopeName).toBe('text.plain.null-grammar')
    })

    it('selects the text.plain grammar over the null grammar', async () => {
      await atom.packages.activatePackage('language-text')
      expect(atom.grammars.selectGrammar('test.txt').scopeName).toBe('text.plain')
    })

    it('selects a grammar based on the file path case insensitively', async () => {
      await atom.packages.activatePackage('language-coffee-script')
      expect(atom.grammars.selectGrammar('/tmp/source.coffee').scopeName).toBe('source.coffee')
      expect(atom.grammars.selectGrammar('/tmp/source.COFFEE').scopeName).toBe('source.coffee')
    })

    describe('on Windows', () => {
      let originalPlatform

      beforeEach(() => {
        originalPlatform = process.platform
        Object.defineProperty(process, 'platform', {value: 'win32'})
      })

      afterEach(() => {
        Object.defineProperty(process, 'platform', {value: originalPlatform})
      })

      it('normalizes back slashes to forward slashes when matching the fileTypes', async () => {
        await atom.packages.activatePackage('language-git')
        expect(atom.grammars.selectGrammar('something\\.git\\config').scopeName).toBe('source.git-config')
      })
    })

    it("can use the filePath to load the correct grammar based on the grammar's filetype", async () => {
      await atom.packages.activatePackage('language-git')
      await atom.packages.activatePackage('language-javascript')
      await atom.packages.activatePackage('language-ruby')

      expect(atom.grammars.selectGrammar('file.js').name).toBe('JavaScript') // based on extension (.js)
      expect(atom.grammars.selectGrammar(path.join(temp.dir, '.git', 'config')).name).toBe('Git Config') // based on end of the path (.git/config)
      expect(atom.grammars.selectGrammar('Rakefile').name).toBe('Ruby') // based on the file's basename (Rakefile)
      expect(atom.grammars.selectGrammar('curb').name).toBe('Null Grammar')
      expect(atom.grammars.selectGrammar('/hu.git/config').name).toBe('Null Grammar')
    })

    it("uses the filePath's shebang line if the grammar cannot be determined by the extension or basename", async () => {
      await atom.packages.activatePackage('language-javascript')
      await atom.packages.activatePackage('language-ruby')

      const filePath = require.resolve('./fixtures/shebang')
      expect(atom.grammars.selectGrammar(filePath).name).toBe('Ruby')
    })

    it('uses the number of newlines in the first line regex to determine the number of lines to test against', async () => {
      await atom.packages.activatePackage('language-property-list')
      await atom.packages.activatePackage('language-coffee-script')

      let fileContent = 'first-line\n<html>'
      expect(atom.grammars.selectGrammar('dummy.coffee', fileContent).name).toBe('CoffeeScript')

      fileContent = '<?xml version="1.0" encoding="UTF-8"?>'
      expect(atom.grammars.selectGrammar('grammar.tmLanguage', fileContent).name).toBe('Null Grammar')

      fileContent += '\n<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">'
      expect(atom.grammars.selectGrammar('grammar.tmLanguage', fileContent).name).toBe('Property List (XML)')
    })

    it("doesn't read the file when the file contents are specified", async () => {
      await atom.packages.activatePackage('language-ruby')

      const filePath = require.resolve('./fixtures/shebang')
      const filePathContents = fs.readFileSync(filePath, 'utf8')
      spyOn(fs, 'read').andCallThrough()
      expect(atom.grammars.selectGrammar(filePath, filePathContents).name).toBe('Ruby')
      expect(fs.read).not.toHaveBeenCalled()
    })

    describe('when multiple grammars have matching fileTypes', () => {
      it('selects the grammar with the longest fileType match', () => {
        const grammarPath1 = temp.path({suffix: '.json'})
        fs.writeFileSync(grammarPath1, JSON.stringify({
          name: 'test1',
          scopeName: 'source1',
          fileTypes: ['test']
        }))
        const grammar1 = atom.grammars.loadGrammarSync(grammarPath1)
        expect(atom.grammars.selectGrammar('more.test', '')).toBe(grammar1)
        fs.removeSync(grammarPath1)

        const grammarPath2 = temp.path({suffix: '.json'})
        fs.writeFileSync(grammarPath2, JSON.stringify({
          name: 'test2',
          scopeName: 'source2',
          fileTypes: ['test', 'more.test']
        }))
        const grammar2 = atom.grammars.loadGrammarSync(grammarPath2)
        expect(atom.grammars.selectGrammar('more.test', '')).toBe(grammar2)
        return fs.removeSync(grammarPath2)
      })
    })

    it('favors non-bundled packages when breaking scoring ties', async () => {
      await atom.packages.activatePackage('language-ruby')
      await atom.packages.activatePackage(path.join(__dirname, 'fixtures', 'packages', 'package-with-rb-filetype'))

      atom.grammars.grammarForScopeName('source.ruby').bundledPackage = true
      atom.grammars.grammarForScopeName('test.rb').bundledPackage = false

      expect(atom.grammars.selectGrammar('test.rb', '#!/usr/bin/env ruby').scopeName).toBe('source.ruby')
      expect(atom.grammars.selectGrammar('test.rb', '#!/usr/bin/env testruby').scopeName).toBe('test.rb')
      expect(atom.grammars.selectGrammar('test.rb').scopeName).toBe('test.rb')
    })

    describe('when there is no file path', () => {
      it('does not throw an exception (regression)', () => {
        expect(() => atom.grammars.selectGrammar(null, '#!/usr/bin/ruby')).not.toThrow()
        expect(() => atom.grammars.selectGrammar(null, '')).not.toThrow()
        expect(() => atom.grammars.selectGrammar(null, null)).not.toThrow()
      })
    })

    describe('when the user has custom grammar file types', () => {
      it('considers the custom file types as well as those defined in the grammar', async () => {
        await atom.packages.activatePackage('language-ruby')
        atom.config.set('core.customFileTypes', {'source.ruby': ['Cheffile']})
        expect(atom.grammars.selectGrammar('build/Cheffile', 'cookbook "postgres"').scopeName).toBe('source.ruby')
      })

      it('favors user-defined file types over built-in ones of equal length', async () => {
        await atom.packages.activatePackage('language-ruby')
        await atom.packages.activatePackage('language-coffee-script')

        atom.config.set('core.customFileTypes', {
          'source.coffee': ['Rakefile'],
          'source.ruby': ['Cakefile']
        })
        expect(atom.grammars.selectGrammar('Rakefile', '').scopeName).toBe('source.coffee')
        expect(atom.grammars.selectGrammar('Cakefile', '').scopeName).toBe('source.ruby')
      })

      it('favors user-defined file types over grammars with matching first-line-regexps', async () => {
        await atom.packages.activatePackage('language-ruby')
        await atom.packages.activatePackage('language-javascript')

        atom.config.set('core.customFileTypes', {'source.ruby': ['bootstrap']})
        expect(atom.grammars.selectGrammar('bootstrap', '#!/usr/bin/env node').scopeName).toBe('source.ruby')
      })
    })

    it('favors a grammar with a matching file type over one with a matching first line pattern', async () => {
      await atom.packages.activatePackage('language-ruby')
      await atom.packages.activatePackage('language-javascript')
      expect(atom.grammars.selectGrammar('foo.rb', '#!/usr/bin/env node').scopeName).toBe('source.ruby')
    })
  })
  describe('.removeGrammar(grammar)', () => {
    it("removes the grammar, so it won't be returned by selectGrammar", async () => {
      await atom.packages.activatePackage('language-javascript')
      const grammar = atom.grammars.selectGrammar('foo.js')
      atom.grammars.removeGrammar(grammar)
      expect(atom.grammars.selectGrammar('foo.js').name).not.toBe(grammar.name)
    })
  })

  describe('serialization', () => {
    it('persists editors\' grammar overrides', async () => {
      const buffer1 = new TextBuffer()
      const buffer2 = new TextBuffer()

      grammarRegistry.loadGrammarSync(require.resolve('language-c/grammars/c.cson'))
      grammarRegistry.loadGrammarSync(require.resolve('language-html/grammars/html.cson'))
      grammarRegistry.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))

      grammarRegistry.maintainLanguageMode(buffer1)
      grammarRegistry.maintainLanguageMode(buffer2)
      grammarRegistry.assignLanguageMode(buffer1, 'source.c')
      grammarRegistry.assignLanguageMode(buffer2, 'source.js')

      const buffer1Copy = await TextBuffer.deserialize(buffer1.serialize())
      const buffer2Copy = await TextBuffer.deserialize(buffer2.serialize())

      const grammarRegistryCopy = new GrammarRegistry({config: atom.config})
      grammarRegistryCopy.deserialize(JSON.parse(JSON.stringify(grammarRegistry.serialize())))

      grammarRegistryCopy.loadGrammarSync(require.resolve('language-c/grammars/c.cson'))
      grammarRegistryCopy.loadGrammarSync(require.resolve('language-html/grammars/html.cson'))

      expect(buffer1Copy.getLanguageMode().getLanguageId()).toBe(null)
      expect(buffer2Copy.getLanguageMode().getLanguageId()).toBe(null)

      grammarRegistryCopy.maintainLanguageMode(buffer1Copy)
      grammarRegistryCopy.maintainLanguageMode(buffer2Copy)
      expect(buffer1Copy.getLanguageMode().getLanguageId()).toBe('source.c')
      expect(buffer2Copy.getLanguageMode().getLanguageId()).toBe(null)

      grammarRegistryCopy.loadGrammarSync(require.resolve('language-javascript/grammars/javascript.cson'))
      expect(buffer1Copy.getLanguageMode().getLanguageId()).toBe('source.c')
      expect(buffer2Copy.getLanguageMode().getLanguageId()).toBe('source.js')
    })
  })
})

function retainedBufferCount (grammarRegistry) {
  return grammarRegistry.grammarScoresByBuffer.size
}

function subscriptionCount (grammarRegistry) {
  return grammarRegistry.subscriptions.disposables.size
}
@@ -1,182 +0,0 @@
path = require 'path'
fs = require 'fs-plus'
temp = require('temp').track()
GrammarRegistry = require '../src/grammar-registry'
Grim = require 'grim'

describe "the `grammars` global", ->
  beforeEach ->
    waitsForPromise ->
      atom.packages.activatePackage('language-text')

    waitsForPromise ->
      atom.packages.activatePackage('language-javascript')

    waitsForPromise ->
      atom.packages.activatePackage('language-coffee-script')

    waitsForPromise ->
      atom.packages.activatePackage('language-ruby')

    waitsForPromise ->
      atom.packages.activatePackage('language-git')

  afterEach ->
    waitsForPromise ->
      atom.packages.deactivatePackages()
    runs ->
      atom.packages.unloadPackages()
      try
        temp.cleanupSync()

  describe ".selectGrammar(filePath)", ->
    it "always returns a grammar", ->
      registry = new GrammarRegistry(config: atom.config)
      expect(registry.selectGrammar().scopeName).toBe 'text.plain.null-grammar'

    it "selects the text.plain grammar over the null grammar", ->
      expect(atom.grammars.selectGrammar('test.txt').scopeName).toBe 'text.plain'

    it "selects a grammar based on the file path case insensitively", ->
      expect(atom.grammars.selectGrammar('/tmp/source.coffee').scopeName).toBe 'source.coffee'
      expect(atom.grammars.selectGrammar('/tmp/source.COFFEE').scopeName).toBe 'source.coffee'

    describe "on Windows", ->
      originalPlatform = null

      beforeEach ->
        originalPlatform = process.platform
        Object.defineProperty process, 'platform', value: 'win32'

      afterEach ->
        Object.defineProperty process, 'platform', value: originalPlatform

      it "normalizes back slashes to forward slashes when matching the fileTypes", ->
        expect(atom.grammars.selectGrammar('something\\.git\\config').scopeName).toBe 'source.git-config'

    it "can use the filePath to load the correct grammar based on the grammar's filetype", ->
      waitsForPromise ->
        atom.packages.activatePackage('language-git')

      runs ->
        expect(atom.grammars.selectGrammar("file.js").name).toBe "JavaScript" # based on extension (.js)
        expect(atom.grammars.selectGrammar(path.join(temp.dir, '.git', 'config')).name).toBe "Git Config" # based on end of the path (.git/config)
        expect(atom.grammars.selectGrammar("Rakefile").name).toBe "Ruby" # based on the file's basename (Rakefile)
        expect(atom.grammars.selectGrammar("curb").name).toBe "Null Grammar"
        expect(atom.grammars.selectGrammar("/hu.git/config").name).toBe "Null Grammar"

    it "uses the filePath's shebang line if the grammar cannot be determined by the extension or basename", ->
      filePath = require.resolve("./fixtures/shebang")
      expect(atom.grammars.selectGrammar(filePath).name).toBe "Ruby"

    it "uses the number of newlines in the first line regex to determine the number of lines to test against", ->
      waitsForPromise ->
        atom.packages.activatePackage('language-property-list')

      runs ->
        fileContent = "first-line\n<html>"
        expect(atom.grammars.selectGrammar("dummy.coffee", fileContent).name).toBe "CoffeeScript"

        fileContent = '<?xml version="1.0" encoding="UTF-8"?>'
        expect(atom.grammars.selectGrammar("grammar.tmLanguage", fileContent).name).toBe "Null Grammar"

        fileContent += '\n<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">'
        expect(atom.grammars.selectGrammar("grammar.tmLanguage", fileContent).name).toBe "Property List (XML)"

    it "doesn't read the file when the file contents are specified", ->
      filePath = require.resolve("./fixtures/shebang")
      filePathContents = fs.readFileSync(filePath, 'utf8')
      spyOn(fs, 'read').andCallThrough()
      expect(atom.grammars.selectGrammar(filePath, filePathContents).name).toBe "Ruby"
      expect(fs.read).not.toHaveBeenCalled()
    describe "when multiple grammars have matching fileTypes", ->
      it "selects the grammar with the longest fileType match", ->
        grammarPath1 = temp.path(suffix: '.json')
        fs.writeFileSync grammarPath1, JSON.stringify(
          name: 'test1'
          scopeName: 'source1'
          fileTypes: ['test']
        )
        grammar1 = atom.grammars.loadGrammarSync(grammarPath1)
        expect(atom.grammars.selectGrammar('more.test', '')).toBe grammar1
        fs.removeSync(grammarPath1)

        grammarPath2 = temp.path(suffix: '.json')
        fs.writeFileSync grammarPath2, JSON.stringify(
          name: 'test2'
          scopeName: 'source2'
          fileTypes: ['test', 'more.test']
        )
        grammar2 = atom.grammars.loadGrammarSync(grammarPath2)
        expect(atom.grammars.selectGrammar('more.test', '')).toBe grammar2
        fs.removeSync(grammarPath2)

    it "favors non-bundled packages when breaking scoring ties", ->
      waitsForPromise ->
        atom.packages.activatePackage(path.join(__dirname, 'fixtures', 'packages', 'package-with-rb-filetype'))

      runs ->
        atom.grammars.grammarForScopeName('source.ruby').bundledPackage = true
        atom.grammars.grammarForScopeName('test.rb').bundledPackage = false

        expect(atom.grammars.selectGrammar('test.rb', '#!/usr/bin/env ruby').scopeName).toBe 'source.ruby'
        expect(atom.grammars.selectGrammar('test.rb', '#!/usr/bin/env testruby').scopeName).toBe 'test.rb'
        expect(atom.grammars.selectGrammar('test.rb').scopeName).toBe 'test.rb'

    describe "when there is no file path", ->
      it "does not throw an exception (regression)", ->
        expect(-> atom.grammars.selectGrammar(null, '#!/usr/bin/ruby')).not.toThrow()
        expect(-> atom.grammars.selectGrammar(null, '')).not.toThrow()
        expect(-> atom.grammars.selectGrammar(null, null)).not.toThrow()

    describe "when the user has custom grammar file types", ->
      it "considers the custom file types as well as those defined in the grammar", ->
        atom.config.set('core.customFileTypes', 'source.ruby': ['Cheffile'])
        expect(atom.grammars.selectGrammar('build/Cheffile', 'cookbook "postgres"').scopeName).toBe 'source.ruby'

      it "favors user-defined file types over built-in ones of equal length", ->
        atom.config.set('core.customFileTypes',
          'source.coffee': ['Rakefile'],
          'source.ruby': ['Cakefile']
        )
        expect(atom.grammars.selectGrammar('Rakefile', '').scopeName).toBe 'source.coffee'
        expect(atom.grammars.selectGrammar('Cakefile', '').scopeName).toBe 'source.ruby'

      it "favors user-defined file types over grammars with matching first-line-regexps", ->
        atom.config.set('core.customFileTypes', 'source.ruby': ['bootstrap'])
        expect(atom.grammars.selectGrammar('bootstrap', '#!/usr/bin/env node').scopeName).toBe 'source.ruby'

    describe "when there is a grammar with a first line pattern, the file type of the file is known, but from a different grammar", ->
      it "favors file type over the matching pattern", ->
        expect(atom.grammars.selectGrammar('foo.rb', '#!/usr/bin/env node').scopeName).toBe 'source.ruby'

  describe ".removeGrammar(grammar)", ->
    it "removes the grammar, so it won't be returned by selectGrammar", ->
      grammar = atom.grammars.selectGrammar('foo.js')
      atom.grammars.removeGrammar(grammar)
      expect(atom.grammars.selectGrammar('foo.js').name).not.toBe grammar.name

  describe "grammar overrides", ->
    it "logs deprecations and uses the TextEditorRegistry", ->
|
||||
editor = null
|
||||
|
||||
waitsForPromise ->
|
||||
atom.workspace.open('sample.js').then (e) -> editor = e
|
||||
|
||||
runs ->
|
||||
spyOn(Grim, 'deprecate')
|
||||
|
||||
atom.grammars.setGrammarOverrideForPath(editor.getPath(), 'source.ruby')
|
||||
expect(Grim.deprecate.callCount).toBe 1
|
||||
expect(editor.getGrammar().name).toBe 'Ruby'
|
||||
|
||||
expect(atom.grammars.grammarOverrideForPath(editor.getPath())).toBe('source.ruby')
|
||||
expect(Grim.deprecate.callCount).toBe 2
|
||||
|
||||
atom.grammars.clearGrammarOverrideForPath(editor.getPath(), 'source.ruby')
|
||||
expect(Grim.deprecate.callCount).toBe 3
|
||||
expect(editor.getGrammar().name).toBe 'JavaScript'
|
||||
|
||||
expect(atom.grammars.grammarOverrideForPath(editor.getPath())).toBe(undefined)
|
||||
expect(Grim.deprecate.callCount).toBe 4
|
||||
@@ -11,6 +11,9 @@ describe("HistoryManager", () => {
  let commandDisposable, projectDisposable

  beforeEach(async () => {
    // Do not clobber recent project history
    spyOn(atom.applicationDelegate, 'didChangeHistoryManager')

    commandDisposable = jasmine.createSpyObj('Disposable', ['dispose'])
    commandRegistry = jasmine.createSpyObj('CommandRegistry', ['add'])
    commandRegistry.add.andReturn(commandDisposable)

@@ -35,7 +35,12 @@ describe('Project', () => {
    })

    it("does not deserialize paths to directories that don't exist", () => {
      deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
      deserializedProject = new Project({
        notificationManager: atom.notifications,
        packageManager: atom.packages,
        confirm: atom.confirm,
        grammarRegistry: atom.grammars
      })
      const state = atom.project.serialize()
      state.paths.push('/directory/that/does/not/exist')

@@ -55,7 +60,12 @@ describe('Project', () => {
      const childPath = path.join(temp.mkdirSync('atom-spec-project'), 'child')
      fs.mkdirSync(childPath)

      deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
      deserializedProject = new Project({
        notificationManager: atom.notifications,
        packageManager: atom.packages,
        confirm: atom.confirm,
        grammarRegistry: atom.grammars
      })
      atom.project.setPaths([childPath])
      const state = atom.project.serialize()

@@ -80,7 +90,12 @@ describe('Project', () => {
      runs(() => {
        expect(atom.project.getBuffers().length).toBe(1)

        deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        deserializedProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => deserializedProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -93,7 +108,12 @@ describe('Project', () => {

      runs(() => {
        expect(atom.project.getBuffers().length).toBe(1)
        deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        deserializedProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => deserializedProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -113,7 +133,12 @@ describe('Project', () => {
      runs(() => {
        expect(atom.project.getBuffers().length).toBe(1)
        fs.mkdirSync(pathToOpen)
        deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        deserializedProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => deserializedProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -131,7 +156,12 @@ describe('Project', () => {
      runs(() => {
        expect(atom.project.getBuffers().length).toBe(1)
        fs.chmodSync(pathToOpen, '000')
        deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        deserializedProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => deserializedProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -148,7 +178,12 @@ describe('Project', () => {
      runs(() => {
        expect(atom.project.getBuffers().length).toBe(1)
        fs.unlinkSync(pathToOpen)
        deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        deserializedProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => deserializedProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -165,7 +200,12 @@ describe('Project', () => {
        atom.workspace.getActiveTextEditor().setText('unsaved\n')
        expect(atom.project.getBuffers().length).toBe(1)

        deserializedProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        deserializedProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => deserializedProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -189,7 +229,12 @@ describe('Project', () => {
        layerA = bufferA.addMarkerLayer({persistent: true})
        markerA = layerA.markPosition([0, 3])
        bufferA.append('!')
        notQuittingProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        notQuittingProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => notQuittingProject.deserialize(atom.project.serialize({isUnloading: false})))

@@ -197,7 +242,12 @@ describe('Project', () => {
      runs(() => {
        expect(notQuittingProject.getBuffers()[0].getMarkerLayer(layerA.id), x => x.getMarker(markerA.id)).toBeUndefined()
        expect(notQuittingProject.getBuffers()[0].undo()).toBe(false)
        quittingProject = new Project({notificationManager: atom.notifications, packageManager: atom.packages, confirm: atom.confirm})
        quittingProject = new Project({
          notificationManager: atom.notifications,
          packageManager: atom.packages,
          confirm: atom.confirm,
          grammarRegistry: atom.grammars
        })
      })

      waitsForPromise(() => quittingProject.deserialize(atom.project.serialize({isUnloading: true})))

@@ -209,7 +259,7 @@ describe('Project', () => {
      })
    })

    describe('when an editor is saved and the project has no path', () =>
    describe('when an editor is saved and the project has no path', () => {
      it("sets the project's path to the saved file's parent directory", () => {
        const tempFile = temp.openSync().path
        atom.project.setPaths([])
@@ -222,7 +272,7 @@ describe('Project', () => {

        runs(() => expect(atom.project.getPaths()[0]).toBe(path.dirname(tempFile)))
      })
    )
    })

  describe('before and after saving a buffer', () => {
    let buffer
@@ -422,7 +472,7 @@ describe('Project', () => {
      atom.project.onDidAddBuffer(newBufferHandler)
    })

    describe("when given an absolute path that isn't currently open", () =>
    describe("when given an absolute path that isn't currently open", () => {
      it("returns a new edit session for the given path and emits 'buffer-created'", () => {
        let editor = null
        waitsForPromise(() => atom.workspace.open(absolutePath).then(o => { editor = o }))

@@ -432,9 +482,9 @@ describe('Project', () => {
        expect(newBufferHandler).toHaveBeenCalledWith(editor.buffer)
      })
    })
    )
    })

    describe("when given a relative path that isn't currently opened", () =>
    describe("when given a relative path that isn't currently opened", () => {
      it("returns a new edit session for the given path (relative to the project root) and emits 'buffer-created'", () => {
        let editor = null
        waitsForPromise(() => atom.workspace.open(absolutePath).then(o => { editor = o }))

@@ -444,9 +494,9 @@ describe('Project', () => {
        expect(newBufferHandler).toHaveBeenCalledWith(editor.buffer)
      })
    })
    )
    })

    describe('when passed the path to a buffer that is currently opened', () =>
    describe('when passed the path to a buffer that is currently opened', () => {
      it('returns a new edit session containing currently opened buffer', () => {
        let editor = null

@@ -465,9 +515,9 @@ describe('Project', () => {
        })
      )
      })
    )
    })

    describe('when not passed a path', () =>
    describe('when not passed a path', () => {
      it("returns a new edit session and emits 'buffer-created'", () => {
        let editor = null
        waitsForPromise(() => atom.workspace.open().then(o => { editor = o }))

@@ -477,7 +527,7 @@ describe('Project', () => {
        expect(newBufferHandler).toHaveBeenCalledWith(editor.buffer)
      })
    })
    )
    })
  })

  describe('.bufferForPath(path)', () => {
@@ -537,7 +587,7 @@ describe('Project', () => {
  })

  describe('.repositoryForDirectory(directory)', () => {
    it('resolves to null when the directory does not have a repository', () =>
    it('resolves to null when the directory does not have a repository', () => {
      waitsForPromise(() => {
        const directory = new Directory('/tmp')
        return atom.project.repositoryForDirectory(directory).then((result) => {
@@ -546,9 +596,9 @@ describe('Project', () => {
        expect(atom.project.repositoryPromisesByPath.size).toBe(0)
      })
    })
    )
    })

    it('resolves to a GitRepository and is cached when the given directory is a Git repo', () =>
    it('resolves to a GitRepository and is cached when the given directory is a Git repo', () => {
      waitsForPromise(() => {
        const directory = new Directory(path.join(__dirname, '..'))
        const promise = atom.project.repositoryForDirectory(directory)
@@ -561,7 +611,7 @@ describe('Project', () => {
        expect(atom.project.repositoryForDirectory(directory)).toBe(promise)
      })
    })
    )
    })

    it('creates a new repository if a previous one with the same directory had been destroyed', () => {
      let repository = null

@@ -582,14 +632,14 @@ describe('Project', () => {
  })

  describe('.setPaths(paths, options)', () => {
    describe('when path is a file', () =>
    describe('when path is a file', () => {
      it("sets its path to the file's parent directory and updates the root directory", () => {
        const filePath = require.resolve('./fixtures/dir/a')
        atom.project.setPaths([filePath])
        expect(atom.project.getPaths()[0]).toEqual(path.dirname(filePath))
        expect(atom.project.getDirectories()[0].path).toEqual(path.dirname(filePath))
      })
    )
    })

    describe('when path is a directory', () => {
      it('assigns the directories and repositories', () => {
@@ -636,13 +686,13 @@ describe('Project', () => {
      })
    })

    describe('when no paths are given', () =>
    describe('when no paths are given', () => {
      it('clears its path', () => {
        atom.project.setPaths([])
        expect(atom.project.getPaths()).toEqual([])
        expect(atom.project.getDirectories()).toEqual([])
      })
    )
    })

    it('normalizes the path to remove consecutive slashes, ., and .. segments', () => {
      atom.project.setPaths([`${require.resolve('./fixtures/dir/a')}${path.sep}b${path.sep}${path.sep}..`])
@@ -693,9 +743,9 @@ describe('Project', () => {
      expect(atom.project.getPaths()).toEqual(previousPaths)
    })

    it('optionally throws on non-existent directories', () =>
    it('optionally throws on non-existent directories', () => {
      expect(() => atom.project.addPath('/this-definitely/does-not-exist', {mustExist: true})).toThrow()
    )
    })
  })

  describe('.removePath(path)', () => {
@@ -813,7 +863,7 @@ describe('Project', () => {
    })
  })

  describe('.onDidAddBuffer()', () =>
  describe('.onDidAddBuffer()', () => {
    it('invokes the callback with added text buffers', () => {
      const buffers = []
      const added = []

@@ -838,9 +888,9 @@ describe('Project', () => {
      expect(added).toEqual([buffers[1]])
    })
  })
  )
  })

  describe('.observeBuffers()', () =>
  describe('.observeBuffers()', () => {
    it('invokes the observer with current and future text buffers', () => {
      const buffers = []
      const observed = []

@@ -872,7 +922,7 @@ describe('Project', () => {
      expect(observed).toEqual(buffers)
    })
  })
  )
  })

  describe('.relativize(path)', () => {
    it('returns the path, relative to whichever root directory it is inside of', () => {
@@ -906,21 +956,21 @@ describe('Project', () => {
      expect(atom.project.relativizePath(childPath)).toEqual([rootPath, path.join('some', 'child', 'directory')])
    })

    describe("when the given path isn't inside of any of the project's path", () =>
    describe("when the given path isn't inside of any of the project's path", () => {
      it('returns null for the root path, and the given path unchanged', () => {
        const randomPath = path.join('some', 'random', 'path')
        expect(atom.project.relativizePath(randomPath)).toEqual([null, randomPath])
      })
    )
    })

    describe('when the given path is a URL', () =>
    describe('when the given path is a URL', () => {
      it('returns null for the root path, and the given path unchanged', () => {
        const url = 'http://the-path'
        expect(atom.project.relativizePath(url)).toEqual([null, url])
      })
    )
    })

    describe('when the given path is inside more than one root folder', () =>
    describe('when the given path is inside more than one root folder', () => {
      it('uses the root folder that is closest to the given path', () => {
        atom.project.addPath(path.join(atom.project.getPaths()[0], 'a-dir'))

@@ -933,10 +983,10 @@ describe('Project', () => {
        path.join('somewhere', 'something.txt')
      ])
    })
    )
    })
  })

  describe('.contains(path)', () =>
  describe('.contains(path)', () => {
    it('returns whether or not the given path is in one of the root directories', () => {
      const rootPath = atom.project.getPaths()[0]
      const childPath = path.join(rootPath, 'some', 'child', 'directory')

@@ -945,11 +995,11 @@ describe('Project', () => {
      const randomPath = path.join('some', 'random', 'path')
      expect(atom.project.contains(randomPath)).toBe(false)
    })
  )
  })

  describe('.resolvePath(uri)', () =>
  describe('.resolvePath(uri)', () => {
    it('normalizes disk drive letter in passed path on #win32', () => {
      expect(atom.project.resolvePath('d:\\file.txt')).toEqual('D:\\file.txt')
    })
  )
  })
})

@@ -7,10 +7,11 @@ fs = require 'fs-plus'
Grim = require 'grim'
pathwatcher = require 'pathwatcher'
FindParentDir = require 'find-parent-dir'
{CompositeDisposable} = require 'event-kit'

TextEditor = require '../src/text-editor'
TextEditorElement = require '../src/text-editor-element'
TokenizedBuffer = require '../src/tokenized-buffer'
TextMateLanguageMode = require '../src/text-mate-language-mode'
clipboard = require '../src/safe-clipboard'

jasmineStyle = document.createElement('style')
@@ -61,6 +62,9 @@ else
  specProjectPath = require('os').tmpdir()

beforeEach ->
  # Do not clobber recent project history
  spyOn(atom.history, 'saveState').andReturn(Promise.resolve())

  atom.project.setPaths([specProjectPath])

  window.resetTimeouts()
@@ -96,8 +100,20 @@ beforeEach ->
  spyOn(TextEditor.prototype, "shouldPromptToSave").andReturn false

  # make tokenization synchronous
  TokenizedBuffer.prototype.chunkSize = Infinity
  spyOn(TokenizedBuffer.prototype, "tokenizeInBackground").andCallFake -> @tokenizeNextChunk()
  TextMateLanguageMode.prototype.chunkSize = Infinity
  spyOn(TextMateLanguageMode.prototype, "tokenizeInBackground").andCallFake -> @tokenizeNextChunk()

  # Without this spy, TextEditor.onDidTokenize callbacks would not be called
  # after the buffer's language mode changed, because by the time the editor
  # called its new language mode's onDidTokenize method, the language mode
  # would already be fully tokenized.
  spyOn(TextEditor.prototype, "onDidTokenize").andCallFake (callback) ->
    new CompositeDisposable(
      @emitter.on("did-tokenize", callback),
      @onDidChangeGrammar =>
        if @buffer.getLanguageMode().tokenizeInBackground.originalValue
          callback()
    )

  clipboardContent = 'initial clipboard content'
  spyOn(clipboard, 'writeText').andCallFake (text) -> clipboardContent = text

@@ -25,6 +25,8 @@ document.registerElement('text-editor-component-test-element', {
  })
})

const editors = []

describe('TextEditorComponent', () => {
  beforeEach(() => {
    jasmine.useRealClock()
@@ -35,6 +37,13 @@ describe('TextEditorComponent', () => {
    jasmine.attachToDOM(scrollbarStyle)
  })

  afterEach(() => {
    for (const editor of editors) {
      editor.destroy()
    }
    editors.length = 0
  })

  describe('rendering', () => {
    it('renders lines and line numbers for the visible region', async () => {
      const {component, element, editor} = buildComponent({rowsPerTile: 3, autoHeight: false})
@@ -786,7 +795,7 @@ describe('TextEditorComponent', () => {
      const {editor, element, component} = buildComponent()
      expect(element.dataset.grammar).toBe('text plain null-grammar')

      editor.setGrammar(atom.grammars.grammarForScopeName('source.js'))
      atom.grammars.assignLanguageMode(editor.getBuffer(), 'source.js')
      await component.getNextUpdatePromise()
      expect(element.dataset.grammar).toBe('source js')
    })
@@ -4482,9 +4491,11 @@ function buildEditor (params = {}) {
  for (const paramName of ['mini', 'autoHeight', 'autoWidth', 'lineNumberGutterVisible', 'showLineNumbers', 'placeholderText', 'softWrapped', 'scrollSensitivity']) {
    if (params[paramName] != null) editorParams[paramName] = params[paramName]
  }
  atom.grammars.autoAssignLanguageMode(buffer)
  const editor = new TextEditor(editorParams)
  editor.testAutoscrollRequests = []
  editor.onDidRequestAutoscroll((request) => { editor.testAutoscrollRequests.push(request) })
  editors.push(editor)
  return editor
}


@@ -77,13 +77,11 @@ describe('TextEditorElement', () => {
  })

  describe('when the model is assigned', () =>
    it("adds the 'mini' attribute if .isMini() returns true on the model", function (done) {
    it("adds the 'mini' attribute if .isMini() returns true on the model", async () => {
      const element = buildTextEditorElement()
      element.getModel().update({mini: true})
      atom.views.getNextUpdatePromise().then(() => {
        expect(element.hasAttribute('mini')).toBe(true)
        done()
      })
      await atom.views.getNextUpdatePromise()
      expect(element.hasAttribute('mini')).toBe(true)
    })
  )

@@ -268,12 +266,11 @@ describe('TextEditorElement', () => {
    })
  )

  describe('::setUpdatedSynchronously', () =>
  describe('::setUpdatedSynchronously', () => {
    it('controls whether the text editor is updated synchronously', () => {
      spyOn(window, 'requestAnimationFrame').andCallFake(fn => fn())

      const element = buildTextEditorElement()
      jasmine.attachToDOM(element)

      expect(element.isUpdatedSynchronously()).toBe(false)

@@ -288,7 +285,7 @@ describe('TextEditorElement', () => {
      expect(window.requestAnimationFrame).not.toHaveBeenCalled()
      expect(element.textContent).toContain('goodbye')
    })
  )
  })

  describe('::getDefaultCharacterWidth', () => {
    it('returns 0 before the element is attached', () => {

@@ -1,10 +1,8 @@
/** @babel */

import TextEditorRegistry from '../src/text-editor-registry'
import TextEditor from '../src/text-editor'
import TextBuffer from 'text-buffer'
import {it, fit, ffit, fffit} from './async-spec-helpers'
import dedent from 'dedent'
const TextEditorRegistry = require('../src/text-editor-registry')
const TextEditor = require('../src/text-editor')
const TextBuffer = require('text-buffer')
const {it, fit, ffit, fffit} = require('./async-spec-helpers')
const dedent = require('dedent')

describe('TextEditorRegistry', function () {
  let registry, editor, initialPackageActivation
@@ -20,6 +18,7 @@ describe('TextEditorRegistry', function () {
    })

    editor = new TextEditor({autoHeight: false})
    expect(atom.grammars.assignLanguageMode(editor, 'text.plain.null-grammar')).toBe(true)
  })

  afterEach(function () {

@@ -71,128 +70,17 @@ describe('TextEditorRegistry', function () {
      atom.config.set('editor.tabLength', 8, {scope: '.source.js'})

      const editor = registry.build({buffer: new TextBuffer({filePath: 'test.js'})})
      expect(editor.getGrammar().name).toBe("JavaScript")
      expect(editor.getTabLength()).toBe(8)
    })
  })

  describe('.maintainGrammar', function () {
    it('assigns a grammar to the editor based on its path', async function () {
      await atom.packages.activatePackage('language-javascript')
      await atom.packages.activatePackage('language-c')

      editor.getBuffer().setPath('test.js')
      registry.maintainGrammar(editor)

      expect(editor.getGrammar().name).toBe('JavaScript')

      editor.getBuffer().setPath('test.c')
      expect(editor.getGrammar().name).toBe('C')
    })

    it('updates the editor\'s grammar when a more appropriate grammar is added for its path', async function () {
      expect(editor.getGrammar().name).toBe('Null Grammar')

      editor.getBuffer().setPath('test.js')
      registry.maintainGrammar(editor)
      await atom.packages.activatePackage('language-javascript')
      expect(editor.getGrammar().name).toBe('JavaScript')
    })

    it('returns a disposable that can be used to stop the registry from updating the editor', async function () {
      await atom.packages.activatePackage('language-javascript')

      const previousSubscriptionCount = getSubscriptionCount(editor)
      const disposable = registry.maintainGrammar(editor)
      expect(getSubscriptionCount(editor)).toBeGreaterThan(previousSubscriptionCount)
      expect(registry.editorsWithMaintainedGrammar.size).toBe(1)

      editor.getBuffer().setPath('test.js')
      expect(editor.getGrammar().name).toBe('JavaScript')

      editor.getBuffer().setPath('test.txt')
      expect(editor.getGrammar().name).toBe('Null Grammar')

      disposable.dispose()
      expect(getSubscriptionCount(editor)).toBe(previousSubscriptionCount)
      expect(registry.editorsWithMaintainedGrammar.size).toBe(0)

      editor.getBuffer().setPath('test.js')
      expect(editor.getGrammar().name).toBe('Null Grammar')
      expect(retainedEditorCount(registry)).toBe(0)
    })

    describe('when called twice with a given editor', function () {
      it('does nothing the second time', async function () {
        await atom.packages.activatePackage('language-javascript')
        const disposable1 = registry.maintainGrammar(editor)
        const disposable2 = registry.maintainGrammar(editor)

        editor.getBuffer().setPath('test.js')
        expect(editor.getGrammar().name).toBe('JavaScript')

        disposable2.dispose()
        editor.getBuffer().setPath('test.txt')
        expect(editor.getGrammar().name).toBe('Null Grammar')

        disposable1.dispose()
        editor.getBuffer().setPath('test.js')
        expect(editor.getGrammar().name).toBe('Null Grammar')
      })
    })
  })

  describe('.setGrammarOverride', function () {
    it('sets the editor\'s grammar and does not update it based on other criteria', async function () {
      await atom.packages.activatePackage('language-c')
      await atom.packages.activatePackage('language-javascript')

      registry.maintainGrammar(editor)
      editor.getBuffer().setPath('file-1.js')
      expect(editor.getGrammar().name).toBe('JavaScript')

      registry.setGrammarOverride(editor, 'source.c')
      expect(editor.getGrammar().name).toBe('C')

      editor.getBuffer().setPath('file-3.rb')
      await atom.packages.activatePackage('language-ruby')
      expect(editor.getGrammar().name).toBe('C')

      editor.getBuffer().setPath('file-1.js')
      expect(editor.getGrammar().name).toBe('C')
    })
  })

  describe('.clearGrammarOverride', function () {
    it('resumes setting the grammar based on its path and content', async function () {
      await atom.packages.activatePackage('language-c')
      await atom.packages.activatePackage('language-javascript')

      registry.maintainGrammar(editor)
      editor.getBuffer().setPath('file-1.js')
      expect(editor.getGrammar().name).toBe('JavaScript')

      registry.setGrammarOverride(editor, 'source.c')
      expect(registry.getGrammarOverride(editor)).toBe('source.c')
      expect(editor.getGrammar().name).toBe('C')

      registry.clearGrammarOverride(editor)
      expect(editor.getGrammar().name).toBe('JavaScript')

      editor.getBuffer().setPath('file-3.rb')
      await atom.packages.activatePackage('language-ruby')
      expect(editor.getGrammar().name).toBe('Ruby')
      expect(registry.getGrammarOverride(editor)).toBe(undefined)
    })
  })

  describe('.maintainConfig(editor)', function () {
    it('does not update the editor when config settings change for unrelated scope selectors', async function () {
      await atom.packages.activatePackage('language-javascript')

      const editor2 = new TextEditor()

      editor2.setGrammar(atom.grammars.selectGrammar('test.js'))
      atom.grammars.assignLanguageMode(editor2, 'source.js')

      registry.maintainConfig(editor)
      registry.maintainConfig(editor2)

@@ -254,14 +142,14 @@ describe('TextEditorRegistry', function () {
|
||||
atom.config.set('core.fileEncoding', 'utf16le', {scopeSelector: '.source.js'})
|
||||
expect(editor.getEncoding()).toBe('utf8')
|
||||
|
||||
editor.setGrammar(atom.grammars.grammarForScopeName('source.js'))
|
||||
atom.grammars.assignLanguageMode(editor, 'source.js')
|
||||
await initialPackageActivation
|
||||
expect(editor.getEncoding()).toBe('utf16le')
|
||||
|
||||
atom.config.set('core.fileEncoding', 'utf16be', {scopeSelector: '.source.js'})
|
||||
expect(editor.getEncoding()).toBe('utf16be')
|
||||
|
||||
editor.setGrammar(atom.grammars.selectGrammar('test.txt'))
|
||||
atom.grammars.assignLanguageMode(editor, 'text.plain.null-grammar')
|
||||
await initialPackageActivation
|
||||
expect(editor.getEncoding()).toBe('utf8')
|
||||
})
|
||||
@@ -331,7 +219,7 @@ describe('TextEditorRegistry', function () {
describe('when the "tabType" config setting is "auto"', function () {
it('enables or disables soft tabs based on the editor\'s content', async function () {
await atom.packages.activatePackage('language-javascript')
editor.setGrammar(atom.grammars.selectGrammar('test.js'))
atom.grammars.assignLanguageMode(editor, 'source.js')
atom.config.set('editor.tabType', 'auto')

registry.maintainConfig(editor)
@@ -342,7 +230,7 @@ describe('TextEditorRegistry', function () {
hello;
}
`)
editor.tokenizedBuffer.retokenizeLines()
editor.getBuffer().getLanguageMode().retokenizeLines()
expect(editor.getSoftTabs()).toBe(true)

editor.setText(dedent`
@@ -350,7 +238,7 @@ describe('TextEditorRegistry', function () {
hello;
}
`)
editor.tokenizedBuffer.retokenizeLines()
editor.getBuffer().getLanguageMode().retokenizeLines()
expect(editor.getSoftTabs()).toBe(false)

editor.setText(dedent`
@@ -361,7 +249,7 @@ describe('TextEditorRegistry', function () {
${'\t'}hello;
}
` + editor.getText())
editor.tokenizedBuffer.retokenizeLines()
editor.getBuffer().getLanguageMode().retokenizeLines()
expect(editor.getSoftTabs()).toBe(false)

editor.setText(dedent`
@@ -374,7 +262,7 @@ describe('TextEditorRegistry', function () {
}
`)

editor.tokenizedBuffer.retokenizeLines()
editor.getBuffer().getLanguageMode().retokenizeLines()
expect(editor.getSoftTabs()).toBe(false)

editor.setText(dedent`
@@ -386,7 +274,7 @@ describe('TextEditorRegistry', function () {
hello;
}
`)
editor.tokenizedBuffer.retokenizeLines()
editor.getBuffer().getLanguageMode().retokenizeLines()
expect(editor.getSoftTabs()).toBe(true)
})
})
@@ -624,19 +512,6 @@ describe('TextEditorRegistry', function () {
expect(editor.getUndoGroupingInterval()).toBe(300)
})

it('sets the non-word characters based on the config', async function () {
editor.update({nonWordCharacters: '()'})
expect(editor.getNonWordCharacters()).toBe('()')

atom.config.set('editor.nonWordCharacters', '(){}')
registry.maintainConfig(editor)
await initialPackageActivation
expect(editor.getNonWordCharacters()).toBe('(){}')

atom.config.set('editor.nonWordCharacters', '(){}[]')
expect(editor.getNonWordCharacters()).toBe('(){}[]')
})

it('sets the scroll sensitivity based on the config', async function () {
editor.update({scrollSensitivity: 50})
expect(editor.getScrollSensitivity()).toBe(50)
@@ -650,21 +525,6 @@ describe('TextEditorRegistry', function () {
expect(editor.getScrollSensitivity()).toBe(70)
})

it('gives the editor a scoped-settings delegate based on the config', async function () {
atom.config.set('editor.nonWordCharacters', '()')
atom.config.set('editor.nonWordCharacters', '(){}', {scopeSelector: '.a.b .c.d'})
atom.config.set('editor.nonWordCharacters', '(){}[]', {scopeSelector: '.e.f *'})

registry.maintainConfig(editor)
await initialPackageActivation

let delegate = editor.getScopedSettingsDelegate()

expect(delegate.getNonWordCharacters(['a.b', 'c.d'])).toBe('(){}')
expect(delegate.getNonWordCharacters(['e.f', 'g.h'])).toBe('(){}[]')
expect(delegate.getNonWordCharacters(['i.j'])).toBe('()')
})

describe('when called twice with a given editor', function () {
it('does nothing the second time', async function () {
editor.update({scrollSensitivity: 50})
@@ -686,46 +546,6 @@ describe('TextEditorRegistry', function () {
})
})
})

describe('serialization', function () {
it('persists editors\' grammar overrides', async function () {
const editor2 = new TextEditor()

await atom.packages.activatePackage('language-c')
await atom.packages.activatePackage('language-html')
await atom.packages.activatePackage('language-javascript')

registry.maintainGrammar(editor)
registry.maintainGrammar(editor2)
registry.setGrammarOverride(editor, 'source.c')
registry.setGrammarOverride(editor2, 'source.js')

await atom.packages.deactivatePackage('language-javascript')

const editorCopy = TextEditor.deserialize(editor.serialize(), atom)
const editor2Copy = TextEditor.deserialize(editor2.serialize(), atom)

const registryCopy = new TextEditorRegistry({
assert: atom.assert,
config: atom.config,
grammarRegistry: atom.grammars,
packageManager: {deferredActivationHooks: null}
})
registryCopy.deserialize(JSON.parse(JSON.stringify(registry.serialize())))

expect(editorCopy.getGrammar().name).toBe('Null Grammar')
expect(editor2Copy.getGrammar().name).toBe('Null Grammar')

registryCopy.maintainGrammar(editorCopy)
registryCopy.maintainGrammar(editor2Copy)
expect(editorCopy.getGrammar().name).toBe('C')
expect(editor2Copy.getGrammar().name).toBe('Null Grammar')

await atom.packages.activatePackage('language-javascript')
expect(editorCopy.getGrammar().name).toBe('C')
expect(editor2Copy.getGrammar().name).toBe('JavaScript')
})
})
})

function getSubscriptionCount (editor) {
@@ -7,6 +7,7 @@ const dedent = require('dedent')
const clipboard = require('../src/safe-clipboard')
const TextEditor = require('../src/text-editor')
const TextBuffer = require('text-buffer')
const TextMateLanguageMode = require('../src/text-mate-language-mode')

describe('TextEditor', () => {
let buffer, editor, lineLengths
@@ -85,22 +86,6 @@ describe('TextEditor', () => {
})
})

describe('when the editor is constructed with the largeFileMode option set to true', () => {
it("loads the editor but doesn't tokenize", async () => {
editor = await atom.workspace.openTextFile('sample.js', {largeFileMode: true})
buffer = editor.getBuffer()
expect(editor.lineTextForScreenRow(0)).toBe(buffer.lineForRow(0))
expect(editor.tokensForScreenRow(0).length).toBe(1)
expect(editor.tokensForScreenRow(1).length).toBe(2) // soft tab
expect(editor.lineTextForScreenRow(12)).toBe(buffer.lineForRow(12))
expect(editor.getCursorScreenPosition()).toEqual([0, 0])

editor.insertText('hey"')
expect(editor.tokensForScreenRow(0).length).toBe(1)
expect(editor.tokensForScreenRow(1).length).toBe(2)
})
})

describe('.copy()', () => {
it('returns a different editor with the same initial state', () => {
expect(editor.getAutoHeight()).toBeFalsy()
@@ -1356,7 +1341,7 @@ describe('TextEditor', () => {
})

it('will limit paragraph range to comments', () => {
editor.setGrammar(atom.grammars.grammarForScopeName('source.js'))
atom.grammars.assignLanguageMode(editor.getBuffer(), 'source.js')
editor.setText(dedent`
var quicksort = function () {
/* Single line comment block */
@@ -2081,14 +2066,13 @@ describe('TextEditor', () => {
expect(scopeDescriptors[0].getScopesArray()).toEqual(['source.js'])
expect(scopeDescriptors[1].getScopesArray()).toEqual(['source.js', 'string.quoted.single.js'])

editor.setScopedSettingsDelegate({
getNonWordCharacters (scopes) {
const result = '/\()"\':,.;<>~!@#$%^&*|+=[]{}`?'
if (scopes.some(scope => scope.startsWith('string'))) {
return result
} else {
return result + '-'
}
spyOn(editor.getBuffer().getLanguageMode(), 'getNonWordCharacters').andCallFake(function (position) {
const result = '/\()"\':,.;<>~!@#$%^&*|+=[]{}`?'
const scopes = this.scopeDescriptorForPosition(position).getScopesArray()
if (scopes.some(scope => scope.startsWith('string'))) {
return result
} else {
return result + '-'
}
})
@@ -3694,7 +3678,7 @@ describe('TextEditor', () => {
describe('when a newline is appended with a trailing closing tag behind the cursor (e.g. by pressing enter in the middel of a line)', () => {
it('indents the new line to the correct level when editor.autoIndent is true and using a curly-bracket language', () => {
editor.update({autoIndent: true})
editor.setGrammar(atom.grammars.selectGrammar('file.js'))
atom.grammars.assignLanguageMode(editor, 'source.js')
editor.setText('var test = () => {\n return true;};')
editor.setCursorBufferPosition([1, 14])
editor.insertNewline()
@@ -3703,7 +3687,7 @@ describe('TextEditor', () => {
})

it('indents the new line to the current level when editor.autoIndent is true and no increaseIndentPattern is specified', () => {
editor.setGrammar(atom.grammars.selectGrammar('file'))
atom.grammars.assignLanguageMode(editor, null)
editor.update({autoIndent: true})
editor.setText(' if true')
editor.setCursorBufferPosition([0, 8])
@@ -3716,7 +3700,7 @@ describe('TextEditor', () => {
it('indents the new line to the correct level when editor.autoIndent is true and using an off-side rule language', async () => {
await atom.packages.activatePackage('language-coffee-script')
editor.update({autoIndent: true})
editor.setGrammar(atom.grammars.selectGrammar('file.coffee'))
atom.grammars.assignLanguageMode(editor, 'source.coffee')
editor.setText('if true\n return trueelse\n return false')
editor.setCursorBufferPosition([1, 13])
editor.insertNewline()
@@ -3730,7 +3714,7 @@ describe('TextEditor', () => {
it('indents the new line to the correct level when editor.autoIndent is true', async () => {
await atom.packages.activatePackage('language-go')
editor.update({autoIndent: true})
editor.setGrammar(atom.grammars.selectGrammar('file.go'))
atom.grammars.assignLanguageMode(editor, 'source.go')
editor.setText('fmt.Printf("some%s",\n "thing")')
editor.setCursorBufferPosition([1, 10])
editor.insertNewline()
@@ -5622,21 +5606,30 @@ describe('TextEditor', () => {
})
})

describe('when a better-matched grammar is added to syntax', () => {
it('switches to the better-matched grammar and re-tokenizes the buffer', async () => {
editor.destroy()
describe('when the buffer\'s language mode changes', () => {
it('notifies onDidTokenize observers when retokenization is finished', async () => {
// Exercise the full `tokenizeInBackground` code path, which bails out early if
// `.setVisible` has not been called with `true`.
jasmine.unspy(TextMateLanguageMode.prototype, 'tokenizeInBackground')
jasmine.attachToDOM(editor.getElement())

const jsGrammar = atom.grammars.selectGrammar('a.js')
atom.grammars.removeGrammar(jsGrammar)
const events = []
editor.onDidTokenize(event => events.push(event))

editor = await atom.workspace.open('sample.js', {autoIndent: false})
await atom.packages.activatePackage('language-c')
expect(atom.grammars.assignLanguageMode(editor.getBuffer(), 'source.c')).toBe(true)
advanceClock(1)
expect(events.length).toBe(1)
})

expect(editor.getGrammar()).toBe(atom.grammars.nullGrammar)
expect(editor.tokensForScreenRow(0).length).toBe(1)
it('notifies onDidChangeGrammar observers', async () => {
const events = []
editor.onDidChangeGrammar(grammar => events.push(grammar))

atom.grammars.addGrammar(jsGrammar)
expect(editor.getGrammar()).toBe(jsGrammar)
expect(editor.tokensForScreenRow(0).length).toBeGreaterThan(1)
await atom.packages.activatePackage('language-c')
expect(atom.grammars.assignLanguageMode(editor.getBuffer(), 'source.c')).toBe(true)
expect(events.length).toBe(1)
expect(events[0].name).toBe('C')
})
})

@@ -6630,17 +6623,6 @@ describe('TextEditor', () => {
})
})

describe('when the editor is constructed with the grammar option set', () => {
beforeEach(async () => {
await atom.packages.activatePackage('language-coffee-script')
})

it('sets the grammar', () => {
editor = new TextEditor({grammar: atom.grammars.grammarForScopeName('source.coffee')})
expect(editor.getGrammar().name).toBe('CoffeeScript')
})
})

describe('softWrapAtPreferredLineLength', () => {
it('soft wraps the editor at the preferred line length unless the editor is narrower or the editor is mini', () => {
editor.update({
@@ -6701,6 +6683,7 @@ describe('TextEditor', () => {
beforeEach(async () => {
editor = await atom.workspace.open('sample.js')
jasmine.unspy(editor, 'shouldPromptToSave')
spyOn(atom.stateStore, 'isConnected').andReturn(true)
})

it('returns true when buffer has unsaved changes', () => {
@@ -6828,7 +6811,7 @@ describe('TextEditor', () => {
})

it('does nothing for empty lines and null grammar', () => {
editor.setGrammar(atom.grammars.grammarForScopeName('text.plain.null-grammar'))
atom.grammars.assignLanguageMode(editor, null)
editor.setCursorBufferPosition([10, 0])
editor.toggleLineCommentsInSelection()
expect(editor.lineTextForBufferRow(10)).toBe('')

spec/text-mate-language-mode-spec.js — 1026 lines, new file (file diff suppressed because it is too large)
@@ -1,43 +0,0 @@
const TextBuffer = require('text-buffer')
const TokenizedBuffer = require('../src/tokenized-buffer')

describe('TokenIterator', () =>
it('correctly terminates scopes at the beginning of the line (regression)', () => {
const grammar = atom.grammars.createGrammar('test', {
'scopeName': 'text.broken',
'name': 'Broken grammar',
'patterns': [
{
'begin': 'start',
'end': '(?=end)',
'name': 'blue.broken'
},
{
'match': '.',
'name': 'yellow.broken'
}
]
})

const buffer = new TextBuffer({text: `\
start x
end x
x\
`})
const tokenizedBuffer = new TokenizedBuffer({
buffer,
config: atom.config,
grammarRegistry: atom.grammars,
packageManager: atom.packages,
assert: atom.assert
})
tokenizedBuffer.setGrammar(grammar)

const tokenIterator = tokenizedBuffer.tokenizedLines[1].getTokenIterator()
tokenIterator.next()

expect(tokenIterator.getBufferStart()).toBe(0)
expect(tokenIterator.getScopeEnds()).toEqual([])
expect(tokenIterator.getScopeStarts()).toEqual(['text.broken', 'yellow.broken'])
})
)
@@ -1,110 +0,0 @@
/** @babel */

import TokenizedBufferIterator from '../src/tokenized-buffer-iterator'
import {Point} from 'text-buffer'

describe('TokenizedBufferIterator', () => {
describe('seek(position)', function () {
it('seeks to the leftmost tag boundary greater than or equal to the given position and returns the containing tags', function () {
const tokenizedBuffer = {
tokenizedLineForRow (row) {
if (row === 0) {
return {
tags: [-1, -2, -3, -4, -5, 3, -3, -4, -6, -5, 4, -6, -3, -4],
text: 'foo bar',
openScopes: []
}
} else {
return null
}
}
}

const iterator = new TokenizedBufferIterator(tokenizedBuffer)

expect(iterator.seek(Point(0, 0))).toEqual([])
expect(iterator.getPosition()).toEqual(Point(0, 0))
expect(iterator.getCloseScopeIds()).toEqual([])
expect(iterator.getOpenScopeIds()).toEqual([257])

iterator.moveToSuccessor()
expect(iterator.getCloseScopeIds()).toEqual([257])
expect(iterator.getOpenScopeIds()).toEqual([259])

expect(iterator.seek(Point(0, 1))).toEqual([261])
expect(iterator.getPosition()).toEqual(Point(0, 3))
expect(iterator.getCloseScopeIds()).toEqual([])
expect(iterator.getOpenScopeIds()).toEqual([259])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(0, 3))
expect(iterator.getCloseScopeIds()).toEqual([259, 261])
expect(iterator.getOpenScopeIds()).toEqual([261])

expect(iterator.seek(Point(0, 3))).toEqual([261])
expect(iterator.getPosition()).toEqual(Point(0, 3))
expect(iterator.getCloseScopeIds()).toEqual([])
expect(iterator.getOpenScopeIds()).toEqual([259])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(0, 3))
expect(iterator.getCloseScopeIds()).toEqual([259, 261])
expect(iterator.getOpenScopeIds()).toEqual([261])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(0, 7))
expect(iterator.getCloseScopeIds()).toEqual([261])
expect(iterator.getOpenScopeIds()).toEqual([259])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(0, 7))
expect(iterator.getCloseScopeIds()).toEqual([259])
expect(iterator.getOpenScopeIds()).toEqual([])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(1, 0))
expect(iterator.getCloseScopeIds()).toEqual([])
expect(iterator.getOpenScopeIds()).toEqual([])

expect(iterator.seek(Point(0, 5))).toEqual([261])
expect(iterator.getPosition()).toEqual(Point(0, 7))
expect(iterator.getCloseScopeIds()).toEqual([261])
expect(iterator.getOpenScopeIds()).toEqual([259])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(0, 7))
expect(iterator.getCloseScopeIds()).toEqual([259])
expect(iterator.getOpenScopeIds()).toEqual([])
})
})

describe('moveToSuccessor()', function () {
it('reports two boundaries at the same position when tags close, open, then close again without a non-negative integer separating them (regression)', () => {
const tokenizedBuffer = {
tokenizedLineForRow () {
return {
tags: [-1, -2, -1, -2],
text: '',
openScopes: []
}
}
}

const iterator = new TokenizedBufferIterator(tokenizedBuffer)

iterator.seek(Point(0, 0))
expect(iterator.getPosition()).toEqual(Point(0, 0))
expect(iterator.getCloseScopeIds()).toEqual([])
expect(iterator.getOpenScopeIds()).toEqual([257])

iterator.moveToSuccessor()
expect(iterator.getPosition()).toEqual(Point(0, 0))
expect(iterator.getCloseScopeIds()).toEqual([257])
expect(iterator.getOpenScopeIds()).toEqual([257])

iterator.moveToSuccessor()
expect(iterator.getCloseScopeIds()).toEqual([257])
expect(iterator.getOpenScopeIds()).toEqual([])
})
})
})
@@ -1,904 +0,0 @@
const NullGrammar = require('../src/null-grammar')
const TokenizedBuffer = require('../src/tokenized-buffer')
const TextBuffer = require('text-buffer')
const {Point, Range} = TextBuffer
const _ = require('underscore-plus')
const dedent = require('dedent')
const {it, fit, ffit, fffit, beforeEach, afterEach} = require('./async-spec-helpers')
const {ScopedSettingsDelegate} = require('../src/text-editor-registry')

describe('TokenizedBuffer', () => {
let tokenizedBuffer, buffer

beforeEach(async () => {
// enable async tokenization
TokenizedBuffer.prototype.chunkSize = 5
jasmine.unspy(TokenizedBuffer.prototype, 'tokenizeInBackground')
await atom.packages.activatePackage('language-javascript')
})

afterEach(() => {
buffer && buffer.destroy()
tokenizedBuffer && tokenizedBuffer.destroy()
})

function startTokenizing (tokenizedBuffer) {
tokenizedBuffer.setVisible(true)
}

function fullyTokenize (tokenizedBuffer) {
tokenizedBuffer.setVisible(true)
while (tokenizedBuffer.firstInvalidRow() != null) {
advanceClock()
}
}

describe('serialization', () => {
describe('when the underlying buffer has a path', () => {
beforeEach(async () => {
buffer = atom.project.bufferForPathSync('sample.js')
await atom.packages.activatePackage('language-coffee-script')
})

it('deserializes it searching among the buffers in the current project', () => {
const tokenizedBufferA = new TokenizedBuffer({buffer, tabLength: 2})
const tokenizedBufferB = TokenizedBuffer.deserialize(JSON.parse(JSON.stringify(tokenizedBufferA.serialize())), atom)
expect(tokenizedBufferB.buffer).toBe(tokenizedBufferA.buffer)
})
})

describe('when the underlying buffer has no path', () => {
beforeEach(() => buffer = atom.project.bufferForPathSync(null))

it('deserializes it searching among the buffers in the current project', () => {
const tokenizedBufferA = new TokenizedBuffer({buffer, tabLength: 2})
const tokenizedBufferB = TokenizedBuffer.deserialize(JSON.parse(JSON.stringify(tokenizedBufferA.serialize())), atom)
expect(tokenizedBufferB.buffer).toBe(tokenizedBufferA.buffer)
})
})
})

describe('tokenizing', () => {
describe('when the buffer is destroyed', () => {
beforeEach(() => {
buffer = atom.project.bufferForPathSync('sample.js')
tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.js'), tabLength: 2})
startTokenizing(tokenizedBuffer)
})

it('stops tokenization', () => {
tokenizedBuffer.destroy()
spyOn(tokenizedBuffer, 'tokenizeNextChunk')
advanceClock()
expect(tokenizedBuffer.tokenizeNextChunk).not.toHaveBeenCalled()
})
})

describe('when the buffer contains soft-tabs', () => {
beforeEach(() => {
buffer = atom.project.bufferForPathSync('sample.js')
tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.js'), tabLength: 2})
startTokenizing(tokenizedBuffer)
})

afterEach(() => {
tokenizedBuffer.destroy()
buffer.release()
})

describe('on construction', () =>
it('tokenizes lines chunk at a time in the background', () => {
const line0 = tokenizedBuffer.tokenizedLines[0]
expect(line0).toBeUndefined()

const line11 = tokenizedBuffer.tokenizedLines[11]
expect(line11).toBeUndefined()

// tokenize chunk 1
advanceClock()
expect(tokenizedBuffer.tokenizedLines[0].ruleStack != null).toBeTruthy()
expect(tokenizedBuffer.tokenizedLines[4].ruleStack != null).toBeTruthy()
expect(tokenizedBuffer.tokenizedLines[5]).toBeUndefined()

// tokenize chunk 2
advanceClock()
expect(tokenizedBuffer.tokenizedLines[5].ruleStack != null).toBeTruthy()
expect(tokenizedBuffer.tokenizedLines[9].ruleStack != null).toBeTruthy()
expect(tokenizedBuffer.tokenizedLines[10]).toBeUndefined()

// tokenize last chunk
advanceClock()
expect(tokenizedBuffer.tokenizedLines[10].ruleStack != null).toBeTruthy()
expect(tokenizedBuffer.tokenizedLines[12].ruleStack != null).toBeTruthy()
})
)

describe('when the buffer is partially tokenized', () => {
beforeEach(() => {
// tokenize chunk 1 only
advanceClock()
})

describe('when there is a buffer change inside the tokenized region', () => {
describe('when lines are added', () => {
it('pushes the invalid rows down', () => {
expect(tokenizedBuffer.firstInvalidRow()).toBe(5)
buffer.insert([1, 0], '\n\n')
expect(tokenizedBuffer.firstInvalidRow()).toBe(7)
})
})

describe('when lines are removed', () => {
it('pulls the invalid rows up', () => {
expect(tokenizedBuffer.firstInvalidRow()).toBe(5)
buffer.delete([[1, 0], [3, 0]])
expect(tokenizedBuffer.firstInvalidRow()).toBe(2)
})
})

describe('when the change invalidates all the lines before the current invalid region', () => {
it('retokenizes the invalidated lines and continues into the valid region', () => {
expect(tokenizedBuffer.firstInvalidRow()).toBe(5)
buffer.insert([2, 0], '/*')
expect(tokenizedBuffer.firstInvalidRow()).toBe(3)
advanceClock()
expect(tokenizedBuffer.firstInvalidRow()).toBe(8)
})
})
})

describe('when there is a buffer change surrounding an invalid row', () => {
it('pushes the invalid row to the end of the change', () => {
buffer.setTextInRange([[4, 0], [6, 0]], '\n\n\n')
expect(tokenizedBuffer.firstInvalidRow()).toBe(8)
})
})

describe('when there is a buffer change inside an invalid region', () => {
it('does not attempt to tokenize the lines in the change, and preserves the existing invalid row', () => {
expect(tokenizedBuffer.firstInvalidRow()).toBe(5)
buffer.setTextInRange([[6, 0], [7, 0]], '\n\n\n')
expect(tokenizedBuffer.tokenizedLines[6]).toBeUndefined()
expect(tokenizedBuffer.tokenizedLines[7]).toBeUndefined()
expect(tokenizedBuffer.firstInvalidRow()).toBe(5)
})
})
})

describe('when the buffer is fully tokenized', () => {
|
||||
beforeEach(() => fullyTokenize(tokenizedBuffer))
|
||||
|
||||
describe('when there is a buffer change that is smaller than the chunk size', () => {
|
||||
describe('when lines are updated, but none are added or removed', () => {
|
||||
it('updates tokens to reflect the change', () => {
|
||||
buffer.setTextInRange([[0, 0], [2, 0]], 'foo()\n7\n')
|
||||
|
||||
expect(tokenizedBuffer.tokenizedLines[0].tokens[1]).toEqual({value: '(', scopes: ['source.js', 'meta.function-call.js', 'meta.arguments.js', 'punctuation.definition.arguments.begin.bracket.round.js']})
|
||||
expect(tokenizedBuffer.tokenizedLines[1].tokens[0]).toEqual({value: '7', scopes: ['source.js', 'constant.numeric.decimal.js']})
|
||||
// line 2 is unchanged
|
||||
expect(tokenizedBuffer.tokenizedLines[2].tokens[1]).toEqual({value: 'if', scopes: ['source.js', 'keyword.control.js']})
|
||||
})
|
||||
|
||||
describe('when the change invalidates the tokenization of subsequent lines', () => {
|
||||
it('schedules the invalidated lines to be tokenized in the background', () => {
|
||||
buffer.insert([5, 30], '/* */')
|
||||
buffer.insert([2, 0], '/*')
|
||||
expect(tokenizedBuffer.tokenizedLines[3].tokens[0].scopes).toEqual(['source.js'])
|
||||
|
||||
advanceClock()
|
||||
expect(tokenizedBuffer.tokenizedLines[3].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
|
||||
expect(tokenizedBuffer.tokenizedLines[4].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
|
||||
expect(tokenizedBuffer.tokenizedLines[5].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
|
||||
})
|
||||
})
|
||||
|
||||
it('resumes highlighting with the state of the previous line', () => {
|
||||
buffer.insert([0, 0], '/*')
|
||||
buffer.insert([5, 0], '*/')
|
||||
|
||||
buffer.insert([1, 0], 'var ')
|
||||
expect(tokenizedBuffer.tokenizedLines[1].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
|
||||
})
|
||||
})
|
||||
|
||||
describe('when lines are both updated and removed', () => {
|
||||
it('updates tokens to reflect the change', () => {
|
||||
buffer.setTextInRange([[1, 0], [3, 0]], 'foo()')
|
||||
|
||||
// previous line 0 remains
|
||||
expect(tokenizedBuffer.tokenizedLines[0].tokens[0]).toEqual({value: 'var', scopes: ['source.js', 'storage.type.var.js']})
|
||||
|
||||
// previous line 3 should be combined with input to form line 1
|
||||
expect(tokenizedBuffer.tokenizedLines[1].tokens[0]).toEqual({value: 'foo', scopes: ['source.js', 'meta.function-call.js', 'entity.name.function.js']})
|
||||
expect(tokenizedBuffer.tokenizedLines[1].tokens[6]).toEqual({value: '=', scopes: ['source.js', 'keyword.operator.assignment.js']})
|
||||
|
||||
// lines below deleted regions should be shifted upward
|
||||
expect(tokenizedBuffer.tokenizedLines[2].tokens[1]).toEqual({value: 'while', scopes: ['source.js', 'keyword.control.js']})
|
||||
expect(tokenizedBuffer.tokenizedLines[3].tokens[1]).toEqual({value: '=', scopes: ['source.js', 'keyword.operator.assignment.js']})
|
||||
expect(tokenizedBuffer.tokenizedLines[4].tokens[1]).toEqual({value: '<', scopes: ['source.js', 'keyword.operator.comparison.js']})
|
||||
})
|
||||
})
|
||||
|
||||
describe('when the change invalidates the tokenization of subsequent lines', () => {
|
||||
it('schedules the invalidated lines to be tokenized in the background', () => {
|
||||
buffer.insert([5, 30], '/* */')
|
||||
buffer.setTextInRange([[2, 0], [3, 0]], '/*')
|
||||
expect(tokenizedBuffer.tokenizedLines[2].tokens[0].scopes).toEqual(['source.js', 'comment.block.js', 'punctuation.definition.comment.begin.js'])
|
||||
expect(tokenizedBuffer.tokenizedLines[3].tokens[0].scopes).toEqual(['source.js'])
|
||||
|
||||
advanceClock()
|
||||
expect(tokenizedBuffer.tokenizedLines[3].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
|
||||
              expect(tokenizedBuffer.tokenizedLines[4].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
            })
          })

          describe('when lines are both updated and inserted', () => {
            it('updates tokens to reflect the change', () => {
              buffer.setTextInRange([[1, 0], [2, 0]], 'foo()\nbar()\nbaz()\nquux()')

              // previous line 0 remains
              expect(tokenizedBuffer.tokenizedLines[0].tokens[0]).toEqual({ value: 'var', scopes: ['source.js', 'storage.type.var.js']})

              // 3 new lines inserted
              expect(tokenizedBuffer.tokenizedLines[1].tokens[0]).toEqual({value: 'foo', scopes: ['source.js', 'meta.function-call.js', 'entity.name.function.js']})
              expect(tokenizedBuffer.tokenizedLines[2].tokens[0]).toEqual({value: 'bar', scopes: ['source.js', 'meta.function-call.js', 'entity.name.function.js']})
              expect(tokenizedBuffer.tokenizedLines[3].tokens[0]).toEqual({value: 'baz', scopes: ['source.js', 'meta.function-call.js', 'entity.name.function.js']})

              // previous line 2 is joined with quux() on line 4
              expect(tokenizedBuffer.tokenizedLines[4].tokens[0]).toEqual({value: 'quux', scopes: ['source.js', 'meta.function-call.js', 'entity.name.function.js']})
              expect(tokenizedBuffer.tokenizedLines[4].tokens[4]).toEqual({value: 'if', scopes: ['source.js', 'keyword.control.js']})

              // previous line 3 is pushed down to become line 5
              expect(tokenizedBuffer.tokenizedLines[5].tokens[3]).toEqual({value: '=', scopes: ['source.js', 'keyword.operator.assignment.js']})
            })
          })

          describe('when the change invalidates the tokenization of subsequent lines', () => {
            it('schedules the invalidated lines to be tokenized in the background', () => {
              buffer.insert([5, 30], '/* */')
              buffer.insert([2, 0], '/*\nabcde\nabcder')
              expect(tokenizedBuffer.tokenizedLines[2].tokens[0].scopes).toEqual(['source.js', 'comment.block.js', 'punctuation.definition.comment.begin.js'])
              expect(tokenizedBuffer.tokenizedLines[3].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
              expect(tokenizedBuffer.tokenizedLines[4].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
              expect(tokenizedBuffer.tokenizedLines[5].tokens[0].scopes).toEqual(['source.js'])

              advanceClock() // tokenize invalidated lines in background
              expect(tokenizedBuffer.tokenizedLines[5].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
              expect(tokenizedBuffer.tokenizedLines[6].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
              expect(tokenizedBuffer.tokenizedLines[7].tokens[0].scopes).toEqual(['source.js', 'comment.block.js'])
              expect(tokenizedBuffer.tokenizedLines[8].tokens[0].scopes).not.toBe(['source.js', 'comment.block.js'])
            })
          })
        })

        describe('when there is an insertion that is larger than the chunk size', () =>
          it('tokenizes the initial chunk synchronously, then tokenizes the remaining lines in the background', () => {
            const commentBlock = _.multiplyString('// a comment\n', tokenizedBuffer.chunkSize + 2)
            buffer.insert([0, 0], commentBlock)
            expect(tokenizedBuffer.tokenizedLines[0].ruleStack != null).toBeTruthy()
            expect(tokenizedBuffer.tokenizedLines[4].ruleStack != null).toBeTruthy()
            expect(tokenizedBuffer.tokenizedLines[5]).toBeUndefined()

            advanceClock()
            expect(tokenizedBuffer.tokenizedLines[5].ruleStack != null).toBeTruthy()
            expect(tokenizedBuffer.tokenizedLines[6].ruleStack != null).toBeTruthy()
          })
        )

        it('does not break out soft tabs across a scope boundary', async () => {
          await atom.packages.activatePackage('language-gfm')

          tokenizedBuffer.setTabLength(4)
          tokenizedBuffer.setGrammar(atom.grammars.selectGrammar('.md'))
          buffer.setText('    <![]()\n    ')
          fullyTokenize(tokenizedBuffer)

          let length = 0
          for (let tag of tokenizedBuffer.tokenizedLines[1].tags) {
            if (tag > 0) length += tag
          }

          expect(length).toBe(4)
        })
      })
    })

    describe('when the buffer contains hard-tabs', () => {
      beforeEach(async () => {
        await atom.packages.activatePackage('language-coffee-script')

        buffer = atom.project.bufferForPathSync('sample-with-tabs.coffee')
        tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.coffee'), tabLength: 2})
        startTokenizing(tokenizedBuffer)
      })

      afterEach(() => {
        tokenizedBuffer.destroy()
        buffer.release()
      })

      describe('when the buffer is fully tokenized', () => {
        beforeEach(() => fullyTokenize(tokenizedBuffer))
      })
    })

    describe('when tokenization completes', () => {
      it('emits the `tokenized` event', async () => {
        const editor = await atom.workspace.open('sample.js')

        const tokenizedHandler = jasmine.createSpy('tokenized handler')
        editor.tokenizedBuffer.onDidTokenize(tokenizedHandler)
        fullyTokenize(editor.tokenizedBuffer)
        expect(tokenizedHandler.callCount).toBe(1)
      })

      it("doesn't re-emit the `tokenized` event when it is re-tokenized", async () => {
        const editor = await atom.workspace.open('sample.js')
        fullyTokenize(editor.tokenizedBuffer)

        const tokenizedHandler = jasmine.createSpy('tokenized handler')
        editor.tokenizedBuffer.onDidTokenize(tokenizedHandler)
        editor.getBuffer().insert([0, 0], "'")
        fullyTokenize(editor.tokenizedBuffer)
        expect(tokenizedHandler).not.toHaveBeenCalled()
      })
    })

    describe('when the grammar is updated because a grammar it includes is activated', () => {
      it('re-emits the `tokenized` event', async () => {
        const editor = await atom.workspace.open('coffee.coffee')

        const tokenizedHandler = jasmine.createSpy('tokenized handler')
        editor.tokenizedBuffer.onDidTokenize(tokenizedHandler)
        fullyTokenize(editor.tokenizedBuffer)
        tokenizedHandler.reset()

        await atom.packages.activatePackage('language-coffee-script')
        fullyTokenize(editor.tokenizedBuffer)
        expect(tokenizedHandler.callCount).toBe(1)
      })

      it('retokenizes the buffer', async () => {
        await atom.packages.activatePackage('language-ruby-on-rails')
        await atom.packages.activatePackage('language-ruby')

        buffer = atom.project.bufferForPathSync()
        buffer.setText("<div class='name'><%= User.find(2).full_name %></div>")

        tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.selectGrammar('test.erb'), tabLength: 2})
        fullyTokenize(tokenizedBuffer)
        expect(tokenizedBuffer.tokenizedLines[0].tokens[0]).toEqual({
          value: "<div class='name'>",
          scopes: ['text.html.ruby']
        })

        await atom.packages.activatePackage('language-html')
        fullyTokenize(tokenizedBuffer)
        expect(tokenizedBuffer.tokenizedLines[0].tokens[0]).toEqual({
          value: '<',
          scopes: ['text.html.ruby', 'meta.tag.block.div.html', 'punctuation.definition.tag.begin.html']
        })
      })
    })

    describe('when the buffer is configured with the null grammar', () => {
      it('does not actually tokenize using the grammar', () => {
        spyOn(NullGrammar, 'tokenizeLine').andCallThrough()
        buffer = atom.project.bufferForPathSync('sample.will-use-the-null-grammar')
        buffer.setText('a\nb\nc')
        tokenizedBuffer = new TokenizedBuffer({buffer, tabLength: 2})
        const tokenizeCallback = jasmine.createSpy('onDidTokenize')
        tokenizedBuffer.onDidTokenize(tokenizeCallback)

        expect(tokenizedBuffer.tokenizedLines[0]).toBeUndefined()
        expect(tokenizedBuffer.tokenizedLines[1]).toBeUndefined()
        expect(tokenizedBuffer.tokenizedLines[2]).toBeUndefined()
        expect(tokenizeCallback.callCount).toBe(0)
        expect(NullGrammar.tokenizeLine).not.toHaveBeenCalled()

        fullyTokenize(tokenizedBuffer)
        expect(tokenizedBuffer.tokenizedLines[0]).toBeUndefined()
        expect(tokenizedBuffer.tokenizedLines[1]).toBeUndefined()
        expect(tokenizedBuffer.tokenizedLines[2]).toBeUndefined()
        expect(tokenizeCallback.callCount).toBe(0)
        expect(NullGrammar.tokenizeLine).not.toHaveBeenCalled()
      })
    })
  })

  describe('.tokenForPosition(position)', () => {
    afterEach(() => {
      tokenizedBuffer.destroy()
      buffer.release()
    })

    it('returns the correct token (regression)', () => {
      buffer = atom.project.bufferForPathSync('sample.js')
      tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.js'), tabLength: 2})
      fullyTokenize(tokenizedBuffer)
      expect(tokenizedBuffer.tokenForPosition([1, 0]).scopes).toEqual(['source.js'])
      expect(tokenizedBuffer.tokenForPosition([1, 1]).scopes).toEqual(['source.js'])
      expect(tokenizedBuffer.tokenForPosition([1, 2]).scopes).toEqual(['source.js', 'storage.type.var.js'])
    })
  })

  describe('.bufferRangeForScopeAtPosition(selector, position)', () => {
    beforeEach(() => {
      buffer = atom.project.bufferForPathSync('sample.js')
      tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.js'), tabLength: 2})
      fullyTokenize(tokenizedBuffer)
    })

    describe('when the selector does not match the token at the position', () =>
      it('returns a falsy value', () => expect(tokenizedBuffer.bufferRangeForScopeAtPosition('.bogus', [0, 1])).toBeUndefined())
    )

    describe('when the selector matches a single token at the position', () => {
      it('returns the range covered by the token', () => {
        expect(tokenizedBuffer.bufferRangeForScopeAtPosition('.storage.type.var.js', [0, 1])).toEqual([[0, 0], [0, 3]])
        expect(tokenizedBuffer.bufferRangeForScopeAtPosition('.storage.type.var.js', [0, 3])).toEqual([[0, 0], [0, 3]])
      })
    })

    describe('when the selector matches a run of multiple tokens at the position', () => {
      it('returns the range covered by all contiguous tokens (within a single line)', () => {
        expect(tokenizedBuffer.bufferRangeForScopeAtPosition('.function', [1, 18])).toEqual([[1, 6], [1, 28]])
      })
    })
  })

  describe('.tokenizedLineForRow(row)', () => {
    it("returns the tokenized line for a row, or a placeholder line if it hasn't been tokenized yet", () => {
      buffer = atom.project.bufferForPathSync('sample.js')
      const grammar = atom.grammars.grammarForScopeName('source.js')
      tokenizedBuffer = new TokenizedBuffer({buffer, grammar, tabLength: 2})
      const line0 = buffer.lineForRow(0)

      const jsScopeStartId = grammar.startIdForScope(grammar.scopeName)
      const jsScopeEndId = grammar.endIdForScope(grammar.scopeName)
      startTokenizing(tokenizedBuffer)
      expect(tokenizedBuffer.tokenizedLines[0]).toBeUndefined()
      expect(tokenizedBuffer.tokenizedLineForRow(0).text).toBe(line0)
      expect(tokenizedBuffer.tokenizedLineForRow(0).tags).toEqual([jsScopeStartId, line0.length, jsScopeEndId])
      advanceClock(1)
      expect(tokenizedBuffer.tokenizedLines[0]).not.toBeUndefined()
      expect(tokenizedBuffer.tokenizedLineForRow(0).text).toBe(line0)
      expect(tokenizedBuffer.tokenizedLineForRow(0).tags).not.toEqual([jsScopeStartId, line0.length, jsScopeEndId])

      const nullScopeStartId = NullGrammar.startIdForScope(NullGrammar.scopeName)
      const nullScopeEndId = NullGrammar.endIdForScope(NullGrammar.scopeName)
      tokenizedBuffer.setGrammar(NullGrammar)
      startTokenizing(tokenizedBuffer)
      expect(tokenizedBuffer.tokenizedLines[0]).toBeUndefined()
      expect(tokenizedBuffer.tokenizedLineForRow(0).text).toBe(line0)
      expect(tokenizedBuffer.tokenizedLineForRow(0).tags).toEqual([nullScopeStartId, line0.length, nullScopeEndId])
      advanceClock(1)
      expect(tokenizedBuffer.tokenizedLineForRow(0).text).toBe(line0)
      expect(tokenizedBuffer.tokenizedLineForRow(0).tags).toEqual([nullScopeStartId, line0.length, nullScopeEndId])
    })

    it('returns undefined if the requested row is outside the buffer range', () => {
      buffer = atom.project.bufferForPathSync('sample.js')
      const grammar = atom.grammars.grammarForScopeName('source.js')
      tokenizedBuffer = new TokenizedBuffer({buffer, grammar, tabLength: 2})
      fullyTokenize(tokenizedBuffer)
      expect(tokenizedBuffer.tokenizedLineForRow(999)).toBeUndefined()
    })
  })

  describe('text decoration layer API', () => {
    describe('iterator', () => {
      it('iterates over the syntactic scope boundaries', () => {
        buffer = new TextBuffer({text: 'var foo = 1 /*\nhello*/var bar = 2\n'})
        tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.js'), tabLength: 2})
        fullyTokenize(tokenizedBuffer)

        const iterator = tokenizedBuffer.buildIterator()
        iterator.seek(Point(0, 0))

        const expectedBoundaries = [
          {position: Point(0, 0), closeTags: [], openTags: ['syntax--source syntax--js', 'syntax--storage syntax--type syntax--var syntax--js']},
          {position: Point(0, 3), closeTags: ['syntax--storage syntax--type syntax--var syntax--js'], openTags: []},
          {position: Point(0, 8), closeTags: [], openTags: ['syntax--keyword syntax--operator syntax--assignment syntax--js']},
          {position: Point(0, 9), closeTags: ['syntax--keyword syntax--operator syntax--assignment syntax--js'], openTags: []},
          {position: Point(0, 10), closeTags: [], openTags: ['syntax--constant syntax--numeric syntax--decimal syntax--js']},
          {position: Point(0, 11), closeTags: ['syntax--constant syntax--numeric syntax--decimal syntax--js'], openTags: []},
          {position: Point(0, 12), closeTags: [], openTags: ['syntax--comment syntax--block syntax--js', 'syntax--punctuation syntax--definition syntax--comment syntax--begin syntax--js']},
          {position: Point(0, 14), closeTags: ['syntax--punctuation syntax--definition syntax--comment syntax--begin syntax--js'], openTags: []},
          {position: Point(1, 5), closeTags: [], openTags: ['syntax--punctuation syntax--definition syntax--comment syntax--end syntax--js']},
          {position: Point(1, 7), closeTags: ['syntax--punctuation syntax--definition syntax--comment syntax--end syntax--js', 'syntax--comment syntax--block syntax--js'], openTags: ['syntax--storage syntax--type syntax--var syntax--js']},
          {position: Point(1, 10), closeTags: ['syntax--storage syntax--type syntax--var syntax--js'], openTags: []},
          {position: Point(1, 15), closeTags: [], openTags: ['syntax--keyword syntax--operator syntax--assignment syntax--js']},
          {position: Point(1, 16), closeTags: ['syntax--keyword syntax--operator syntax--assignment syntax--js'], openTags: []},
          {position: Point(1, 17), closeTags: [], openTags: ['syntax--constant syntax--numeric syntax--decimal syntax--js']},
          {position: Point(1, 18), closeTags: ['syntax--constant syntax--numeric syntax--decimal syntax--js'], openTags: []}
        ]

        while (true) {
          const boundary = {
            position: iterator.getPosition(),
            closeTags: iterator.getCloseScopeIds().map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId)),
            openTags: iterator.getOpenScopeIds().map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))
          }

          expect(boundary).toEqual(expectedBoundaries.shift())
          if (!iterator.moveToSuccessor()) { break }
        }

        expect(iterator.seek(Point(0, 1)).map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual([
          'syntax--source syntax--js',
          'syntax--storage syntax--type syntax--var syntax--js'
        ])
        expect(iterator.getPosition()).toEqual(Point(0, 3))
        expect(iterator.seek(Point(0, 8)).map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual([
          'syntax--source syntax--js'
        ])
        expect(iterator.getPosition()).toEqual(Point(0, 8))
        expect(iterator.seek(Point(1, 0)).map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual([
          'syntax--source syntax--js',
          'syntax--comment syntax--block syntax--js'
        ])
        expect(iterator.getPosition()).toEqual(Point(1, 0))
        expect(iterator.seek(Point(1, 18)).map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual([
          'syntax--source syntax--js',
          'syntax--constant syntax--numeric syntax--decimal syntax--js'
        ])
        expect(iterator.getPosition()).toEqual(Point(1, 18))

        expect(iterator.seek(Point(2, 0)).map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual([
          'syntax--source syntax--js'
        ])
        iterator.moveToSuccessor() // ensure we don't infinitely loop (regression test)
      })

      it('does not report columns beyond the length of the line', async () => {
        await atom.packages.activatePackage('language-coffee-script')

        buffer = new TextBuffer({text: '# hello\n# world'})
        tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.coffee'), tabLength: 2})
        fullyTokenize(tokenizedBuffer)

        const iterator = tokenizedBuffer.buildIterator()
        iterator.seek(Point(0, 0))
        iterator.moveToSuccessor()
        iterator.moveToSuccessor()
        expect(iterator.getPosition().column).toBe(7)

        iterator.moveToSuccessor()
        expect(iterator.getPosition().column).toBe(0)

        iterator.seek(Point(0, 7))
        expect(iterator.getPosition().column).toBe(7)

        iterator.seek(Point(0, 8))
        expect(iterator.getPosition().column).toBe(7)
      })

      it('correctly terminates scopes at the beginning of the line (regression)', () => {
        const grammar = atom.grammars.createGrammar('test', {
          'scopeName': 'text.broken',
          'name': 'Broken grammar',
          'patterns': [
            {'begin': 'start', 'end': '(?=end)', 'name': 'blue.broken'},
            {'match': '.', 'name': 'yellow.broken'}
          ]
        })

        buffer = new TextBuffer({text: 'start x\nend x\nx'})
        tokenizedBuffer = new TokenizedBuffer({buffer, grammar, tabLength: 2})
        fullyTokenize(tokenizedBuffer)

        const iterator = tokenizedBuffer.buildIterator()
        iterator.seek(Point(1, 0))

        expect(iterator.getPosition()).toEqual([1, 0])
        expect(iterator.getCloseScopeIds().map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual(['syntax--blue syntax--broken'])
        expect(iterator.getOpenScopeIds().map(scopeId => tokenizedBuffer.classNameForScopeId(scopeId))).toEqual(['syntax--yellow syntax--broken'])
      })
    })
  })

  describe('.suggestedIndentForBufferRow', () => {
    let editor

    describe('javascript', () => {
      beforeEach(async () => {
        editor = await atom.workspace.open('sample.js', {autoIndent: false})
        await atom.packages.activatePackage('language-javascript')
      })

      it('bases indentation off of the previous non-blank line', () => {
        expect(editor.suggestedIndentForBufferRow(0)).toBe(0)
        expect(editor.suggestedIndentForBufferRow(1)).toBe(1)
        expect(editor.suggestedIndentForBufferRow(2)).toBe(2)
        expect(editor.suggestedIndentForBufferRow(5)).toBe(3)
        expect(editor.suggestedIndentForBufferRow(7)).toBe(2)
        expect(editor.suggestedIndentForBufferRow(9)).toBe(1)
        expect(editor.suggestedIndentForBufferRow(11)).toBe(1)
      })

      it('does not take invisibles into account', () => {
        editor.update({showInvisibles: true})
        expect(editor.suggestedIndentForBufferRow(0)).toBe(0)
        expect(editor.suggestedIndentForBufferRow(1)).toBe(1)
        expect(editor.suggestedIndentForBufferRow(2)).toBe(2)
        expect(editor.suggestedIndentForBufferRow(5)).toBe(3)
        expect(editor.suggestedIndentForBufferRow(7)).toBe(2)
        expect(editor.suggestedIndentForBufferRow(9)).toBe(1)
        expect(editor.suggestedIndentForBufferRow(11)).toBe(1)
      })
    })

    describe('css', () => {
      beforeEach(async () => {
        editor = await atom.workspace.open('css.css', {autoIndent: true})
        await atom.packages.activatePackage('language-source')
        await atom.packages.activatePackage('language-css')
      })

      it('does not return negative values (regression)', () => {
        editor.setText('.test {\npadding: 0;\n}')
        expect(editor.suggestedIndentForBufferRow(2)).toBe(0)
      })
    })
  })

  describe('.isFoldableAtRow(row)', () => {
    beforeEach(() => {
      buffer = atom.project.bufferForPathSync('sample.js')
      buffer.insert([10, 0], ' // multi-line\n // comment\n // block\n')
      buffer.insert([0, 0], '// multi-line\n// comment\n// block\n')
      tokenizedBuffer = new TokenizedBuffer({buffer, grammar: atom.grammars.grammarForScopeName('source.js'), tabLength: 2})
      fullyTokenize(tokenizedBuffer)
    })

    it('includes the first line of multi-line comments', () => {
      expect(tokenizedBuffer.isFoldableAtRow(0)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(1)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(2)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(3)).toBe(true) // because of indent
      expect(tokenizedBuffer.isFoldableAtRow(13)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(14)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(15)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(16)).toBe(false)

      buffer.insert([0, Infinity], '\n')

      expect(tokenizedBuffer.isFoldableAtRow(0)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(1)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(2)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(3)).toBe(false)

      buffer.undo()

      expect(tokenizedBuffer.isFoldableAtRow(0)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(1)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(2)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(3)).toBe(true) // because of indent
    })

    it('includes non-comment lines that precede an increase in indentation', () => {
      buffer.insert([2, 0], ' ') // commented lines preceding an indent aren't foldable

      expect(tokenizedBuffer.isFoldableAtRow(1)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(2)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(3)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(4)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(5)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(6)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(7)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(8)).toBe(false)

      buffer.insert([7, 0], ' ')

      expect(tokenizedBuffer.isFoldableAtRow(6)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(7)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(8)).toBe(false)

      buffer.undo()

      expect(tokenizedBuffer.isFoldableAtRow(6)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(7)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(8)).toBe(false)

      buffer.insert([7, 0], ' \n x\n')

      expect(tokenizedBuffer.isFoldableAtRow(6)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(7)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(8)).toBe(false)

      buffer.insert([9, 0], ' ')

      expect(tokenizedBuffer.isFoldableAtRow(6)).toBe(true)
      expect(tokenizedBuffer.isFoldableAtRow(7)).toBe(false)
      expect(tokenizedBuffer.isFoldableAtRow(8)).toBe(false)
    })
  })

  describe('.getFoldableRangesAtIndentLevel', () => {
    it('returns the ranges that can be folded at the given indent level', () => {
      buffer = new TextBuffer(dedent `
        if (a) {
          b();
          if (c) {
            d()
            if (e) {
              f()
            }
            g()
          }
          h()
        }
        i()
        if (j) {
          k()
        }
      `)

      tokenizedBuffer = new TokenizedBuffer({buffer})

      expect(simulateFold(tokenizedBuffer.getFoldableRangesAtIndentLevel(0, 2))).toBe(dedent `
        if (a) {⋯
        }
        i()
        if (j) {⋯
        }
      `)

      expect(simulateFold(tokenizedBuffer.getFoldableRangesAtIndentLevel(1, 2))).toBe(dedent `
        if (a) {
          b();
          if (c) {⋯
          }
          h()
        }
        i()
        if (j) {
          k()
        }
      `)

      expect(simulateFold(tokenizedBuffer.getFoldableRangesAtIndentLevel(2, 2))).toBe(dedent `
        if (a) {
          b();
          if (c) {
            d()
            if (e) {⋯
            }
            g()
          }
          h()
        }
        i()
        if (j) {
          k()
        }
      `)
    })
  })

  describe('.getFoldableRanges', () => {
    it('returns the ranges that can be folded', () => {
      buffer = new TextBuffer(dedent `
        if (a) {
          b();
          if (c) {
            d()
            if (e) {
              f()
            }
            g()
          }
          h()
        }
        i()
        if (j) {
          k()
        }
      `)

      tokenizedBuffer = new TokenizedBuffer({buffer})

      expect(tokenizedBuffer.getFoldableRanges(2).map(r => r.toString())).toEqual([
        ...tokenizedBuffer.getFoldableRangesAtIndentLevel(0, 2),
        ...tokenizedBuffer.getFoldableRangesAtIndentLevel(1, 2),
        ...tokenizedBuffer.getFoldableRangesAtIndentLevel(2, 2),
      ].sort((a, b) => (a.start.row - b.start.row) || (a.end.row - b.end.row)).map(r => r.toString()))
    })
  })

  describe('.getFoldableRangeContainingPoint', () => {
    it('returns the range for the smallest fold that contains the given range', () => {
      buffer = new TextBuffer(dedent `
        if (a) {
          b();
          if (c) {
            d()
            if (e) {
              f()
            }
            g()
          }
          h()
        }
        i()
        if (j) {
          k()
        }
      `)

      tokenizedBuffer = new TokenizedBuffer({buffer})

      expect(tokenizedBuffer.getFoldableRangeContainingPoint(Point(0, 5), 2)).toBeNull()

      let range = tokenizedBuffer.getFoldableRangeContainingPoint(Point(0, 10), 2)
      expect(simulateFold([range])).toBe(dedent `
        if (a) {⋯
        }
        i()
        if (j) {
          k()
        }
      `)

      range = tokenizedBuffer.getFoldableRangeContainingPoint(Point(1, Infinity), 2)
      expect(simulateFold([range])).toBe(dedent `
        if (a) {⋯
        }
        i()
        if (j) {
          k()
        }
      `)

      range = tokenizedBuffer.getFoldableRangeContainingPoint(Point(2, 20), 2)
      expect(simulateFold([range])).toBe(dedent `
        if (a) {
          b();
          if (c) {⋯
          }
          h()
        }
        i()
        if (j) {
          k()
        }
      `)
    })

    it('works for coffee-script', async () => {
      const editor = await atom.workspace.open('coffee.coffee')
      await atom.packages.activatePackage('language-coffee-script')
      buffer = editor.buffer
      tokenizedBuffer = editor.tokenizedBuffer

      expect(tokenizedBuffer.getFoldableRangeContainingPoint(Point(0, Infinity))).toEqual([[0, Infinity], [20, Infinity]])
      expect(tokenizedBuffer.getFoldableRangeContainingPoint(Point(1, Infinity))).toEqual([[1, Infinity], [17, Infinity]])
      expect(tokenizedBuffer.getFoldableRangeContainingPoint(Point(2, Infinity))).toEqual([[1, Infinity], [17, Infinity]])
      expect(tokenizedBuffer.getFoldableRangeContainingPoint(Point(19, Infinity))).toEqual([[19, Infinity], [20, Infinity]])
    })

    it('works for javascript', async () => {
      const editor = await atom.workspace.open('sample.js')
      await atom.packages.activatePackage('language-javascript')
      buffer = editor.buffer
      tokenizedBuffer = editor.tokenizedBuffer

      expect(editor.tokenizedBuffer.getFoldableRangeContainingPoint(Point(0, Infinity))).toEqual([[0, Infinity], [12, Infinity]])
      expect(editor.tokenizedBuffer.getFoldableRangeContainingPoint(Point(1, Infinity))).toEqual([[1, Infinity], [9, Infinity]])
      expect(editor.tokenizedBuffer.getFoldableRangeContainingPoint(Point(2, Infinity))).toEqual([[1, Infinity], [9, Infinity]])
      expect(editor.tokenizedBuffer.getFoldableRangeContainingPoint(Point(4, Infinity))).toEqual([[4, Infinity], [7, Infinity]])
    })
  })

  function simulateFold (ranges) {
    buffer.transact(() => {
      for (const range of ranges.reverse()) {
        buffer.setTextInRange(range, '⋯')
      }
    })
    let text = buffer.getText()
    buffer.undo()
    return text
  }
})
@@ -1,5 +1,7 @@
|
||||
const path = require('path')
|
||||
const temp = require('temp').track()
|
||||
const dedent = require('dedent')
|
||||
const TextBuffer = require('text-buffer')
|
||||
const TextEditor = require('../src/text-editor')
|
||||
const Workspace = require('../src/workspace')
|
||||
const Project = require('../src/project')
|
||||
@@ -43,7 +45,8 @@ describe('Workspace', () => {
|
||||
notificationManager: atom.notifications,
|
||||
packageManager: atom.packages,
|
||||
confirm: atom.confirm.bind(atom),
|
||||
applicationDelegate: atom.applicationDelegate
|
||||
applicationDelegate: atom.applicationDelegate,
|
||||
grammarRegistry: atom.grammars
|
||||
})
|
||||
return atom.project.deserialize(projectState).then(() => {
|
||||
workspace = atom.workspace = new Workspace({
|
||||
@@ -656,17 +659,6 @@ describe('Workspace', () => {
|
||||
})
|
||||
})
|
||||
|
||||
describe('when the file is over 2MB', () => {
|
||||
it('opens the editor with largeFileMode: true', () => {
|
||||
spyOn(fs, 'getSizeSync').andReturn(2 * 1048577) // 2MB
|
||||
|
||||
let editor = null
|
||||
waitsForPromise(() => workspace.open('sample.js').then(e => { editor = e }))
|
||||
|
||||
runs(() => expect(editor.largeFileMode).toBe(true))
|
||||
})
|
||||
})
|
||||
|
||||
describe('when the file is over user-defined limit', () => {
|
||||
const shouldPromptForFileOfSize = (size, shouldPrompt) => {
|
||||
spyOn(fs, 'getSizeSync').andReturn(size * 1048577)
|
||||
@@ -689,7 +681,6 @@ describe('Workspace', () => {
|
||||
|
||||
runs(() => {
|
||||
expect(atom.applicationDelegate.confirm).toHaveBeenCalled()
|
||||
expect(editor.largeFileMode).toBe(true)
|
||||
})
|
||||
} else {
|
||||
runs(() => expect(editor).not.toBeUndefined())
|
||||
@@ -943,6 +934,18 @@ describe('Workspace', () => {
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe('when opening an editor with a buffer that isn\'t part of the project', () => {
|
||||
it('adds the buffer to the project', async () => {
|
||||
const buffer = new TextBuffer()
|
||||
const editor = new TextEditor({buffer})
|
||||
|
||||
await atom.workspace.open(editor)
|
||||
|
||||
expect(atom.project.getBuffers().map(buffer => buffer.id)).toContain(buffer.id)
|
||||
expect(buffer.getLanguageMode().getLanguageId()).toBe('text.plain.null-grammar')
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
describe('finding items in the workspace', () => {
|
||||
@@ -1217,8 +1220,8 @@ describe('Workspace', () => {
      })
    })

    describe('::onDidStopChangingActivePaneItem()', function () {
      it('invokes observers when the active item of the active pane stops changing', function () {
    describe('::onDidStopChangingActivePaneItem()', () => {
      it('invokes observers when the active item of the active pane stops changing', () => {
        const pane1 = atom.workspace.getCenter().getActivePane()
        const pane2 = pane1.splitRight({items: [document.createElement('div'), document.createElement('div')]});
        atom.workspace.getLeftDock().getActivePane().addItem(document.createElement('div'))
@@ -1237,29 +1240,22 @@ describe('Workspace', () => {
    })

    describe('the grammar-used hook', () => {
      it('fires when opening a file or changing the grammar of an open file', () => {
        let editor = null
        let javascriptGrammarUsed = false
        let coffeescriptGrammarUsed = false
      it('fires when opening a file or changing the grammar of an open file', async () => {
        let resolveJavascriptGrammarUsed, resolveCoffeeScriptGrammarUsed
        const javascriptGrammarUsed = new Promise(resolve => { resolveJavascriptGrammarUsed = resolve })
        const coffeescriptGrammarUsed = new Promise(resolve => { resolveCoffeeScriptGrammarUsed = resolve })

        atom.packages.triggerDeferredActivationHooks()
        atom.packages.onDidTriggerActivationHook('language-javascript:grammar-used', resolveJavascriptGrammarUsed)
        atom.packages.onDidTriggerActivationHook('language-coffee-script:grammar-used', resolveCoffeeScriptGrammarUsed)

        runs(() => {
          atom.packages.onDidTriggerActivationHook('language-javascript:grammar-used', () => { javascriptGrammarUsed = true })
          atom.packages.onDidTriggerActivationHook('language-coffee-script:grammar-used', () => { coffeescriptGrammarUsed = true })
        })
        const editor = await atom.workspace.open('sample.js', {autoIndent: false})
        await atom.packages.activatePackage('language-javascript')
        await javascriptGrammarUsed

        waitsForPromise(() => atom.workspace.open('sample.js', {autoIndent: false}).then(o => { editor = o }))

        waitsForPromise(() => atom.packages.activatePackage('language-javascript'))

        waitsFor(() => javascriptGrammarUsed)

        waitsForPromise(() => atom.packages.activatePackage('language-coffee-script'))

        runs(() => editor.setGrammar(atom.grammars.selectGrammar('.coffee')))

        waitsFor(() => coffeescriptGrammarUsed)
        await atom.packages.activatePackage('language-coffee-script')
        atom.grammars.assignLanguageMode(editor, 'source.coffee')
        await coffeescriptGrammarUsed
      })
    })

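The rewrite above replaces `waitsFor` polling with a Promise whose resolver is registered directly as the activation-hook callback, so the spec can simply `await` the hook. A self-contained sketch of that pattern, where `FakeHooks` is a hypothetical stand-in for `atom.packages` (not a real Atom API):

```javascript
// Capture a Promise's resolver and hand it to an event callback,
// then `await` the promise instead of polling a flag.
class FakeHooks {
  constructor () { this.callbacks = new Map() }
  on (name, callback) { this.callbacks.set(name, callback) }
  trigger (name) {
    const callback = this.callbacks.get(name)
    if (callback) callback()
  }
}

async function main () {
  const hooks = new FakeHooks()
  let resolveGrammarUsed
  const grammarUsed = new Promise(resolve => { resolveGrammarUsed = resolve })
  hooks.on('language-javascript:grammar-used', resolveGrammarUsed)

  // Simulate the hook firing at some later point.
  setTimeout(() => hooks.trigger('language-javascript:grammar-used'), 0)

  await grammarUsed  // resolves once the hook fires; no polling needed
  return 'hook-fired'
}
```

The same shape works for any number of hooks, which is why the spec creates one promise/resolver pair per grammar.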
@@ -1382,7 +1378,7 @@ describe('Workspace', () => {

  describe('::getActiveTextEditor()', () => {
    describe("when the workspace center's active pane item is a text editor", () => {
      describe('when the workspace center has focus', function () {
      describe('when the workspace center has focus', () => {
        it('returns the text editor', () => {
          const workspaceCenter = workspace.getCenter()
          const editor = new TextEditor()
@@ -1393,7 +1389,7 @@ describe('Workspace', () => {
        })
      })

      describe('when a dock has focus', function () {
      describe('when a dock has focus', () => {
        it('returns the text editor', () => {
          const workspaceCenter = workspace.getCenter()
          const editor = new TextEditor()
@@ -1521,34 +1517,27 @@ describe('Workspace', () => {
    })

    describe('when an editor is destroyed', () => {
      it('removes the editor', () => {
        let editor = null

        waitsForPromise(() => workspace.open('a').then(e => { editor = e }))

        runs(() => {
          expect(workspace.getTextEditors()).toHaveLength(1)
          editor.destroy()
          expect(workspace.getTextEditors()).toHaveLength(0)
        })
      it('removes the editor', async () => {
        const editor = await workspace.open('a')
        expect(workspace.getTextEditors()).toHaveLength(1)
        editor.destroy()
        expect(workspace.getTextEditors()).toHaveLength(0)
      })
    })

    describe('when an editor is copied because its pane is split', () => {
      it('sets up the new editor to be configured by the text editor registry', () => {
        waitsForPromise(() => atom.packages.activatePackage('language-javascript'))
      it('sets up the new editor to be configured by the text editor registry', async () => {
        await atom.packages.activatePackage('language-javascript')

        waitsForPromise(() =>
          workspace.open('a').then(editor => {
            atom.textEditors.setGrammarOverride(editor, 'source.js')
            expect(editor.getGrammar().name).toBe('JavaScript')
        const editor = await workspace.open('a')

            workspace.getActivePane().splitRight({copyActiveItem: true})
            const newEditor = workspace.getActiveTextEditor()
            expect(newEditor).not.toBe(editor)
            expect(newEditor.getGrammar().name).toBe('JavaScript')
          })
        )
        atom.grammars.assignLanguageMode(editor, 'source.js')
        expect(editor.getGrammar().name).toBe('JavaScript')

        workspace.getActivePane().splitRight({copyActiveItem: true})
        const newEditor = workspace.getActiveTextEditor()
        expect(newEditor).not.toBe(editor)
        expect(newEditor.getGrammar().name).toBe('JavaScript')
      })
    })

@@ -1561,11 +1550,10 @@ describe('Workspace', () => {

      waitsForPromise(() => atom.workspace.open('sample.coffee'))

      runs(function () {
        atom.workspace.getActiveTextEditor().setText(`\
i = /test/; #FIXME\
`
        )
      runs(() => {
        atom.workspace.getActiveTextEditor().setText(dedent `
          i = /test/; #FIXME\
        `)

        const atom2 = new AtomEnvironment({applicationDelegate: atom.applicationDelegate})
        atom2.initialize({
@@ -2789,7 +2777,7 @@ i = /test/; #FIXME\
    })

    describe('grammar activation', () => {
      it('notifies the workspace of which grammar is used', () => {
      it('notifies the workspace of which grammar is used', async () => {
        atom.packages.triggerDeferredActivationHooks()

        const javascriptGrammarUsed = jasmine.createSpy('js grammar used')
@@ -2800,24 +2788,22 @@ i = /test/; #FIXME\
        atom.packages.onDidTriggerActivationHook('language-ruby:grammar-used', rubyGrammarUsed)
        atom.packages.onDidTriggerActivationHook('language-c:grammar-used', cGrammarUsed)

        waitsForPromise(() => atom.packages.activatePackage('language-ruby'))
        waitsForPromise(() => atom.packages.activatePackage('language-javascript'))
        waitsForPromise(() => atom.packages.activatePackage('language-c'))
        waitsForPromise(() => atom.workspace.open('sample-with-comments.js'))
        await atom.packages.activatePackage('language-ruby')
        await atom.packages.activatePackage('language-javascript')
        await atom.packages.activatePackage('language-c')
        await atom.workspace.open('sample-with-comments.js')

        runs(() => {
          // Hooks are triggered when opening new editors
          expect(javascriptGrammarUsed).toHaveBeenCalled()
        // Hooks are triggered when opening new editors
        expect(javascriptGrammarUsed).toHaveBeenCalled()

          // Hooks are triggered when changing existing editors' grammars
          atom.workspace.getActiveTextEditor().setGrammar(atom.grammars.grammarForScopeName('source.c'))
          expect(cGrammarUsed).toHaveBeenCalled()
        // Hooks are triggered when changing existing editors' grammars
        atom.grammars.assignLanguageMode(atom.workspace.getActiveTextEditor(), 'source.c')
        expect(cGrammarUsed).toHaveBeenCalled()

          // Hooks are triggered when editors are added in other ways.
          atom.workspace.getActivePane().splitRight({copyActiveItem: true})
          atom.workspace.getActiveTextEditor().setGrammar(atom.grammars.grammarForScopeName('source.ruby'))
          expect(rubyGrammarUsed).toHaveBeenCalled()
        })
        // Hooks are triggered when editors are added in other ways.
        atom.workspace.getActivePane().splitRight({copyActiveItem: true})
        atom.grammars.assignLanguageMode(atom.workspace.getActiveTextEditor(), 'source.ruby')
        expect(rubyGrammarUsed).toHaveBeenCalled()
      })
    })

@@ -2894,4 +2880,6 @@ i = /test/; #FIXME\
  })
})

const escapeStringRegex = str => str.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&')
function escapeStringRegex (string) {
  return string.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&')
}

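The spec helper converted in the last hunk can be exercised on its own: it backslash-escapes every regex metacharacter so arbitrary text can be embedded in a `RegExp` and matched literally. The usage lines below are illustrative, not part of the diff:

```javascript
// Backslash-escape regex metacharacters so the input matches literally.
function escapeStringRegex (string) {
  return string.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&')
}

// '.' and '()' are escaped, so this matches the exact text "a.b(c)"
// rather than treating them as regex operators.
const pattern = new RegExp(escapeStringRegex('a.b(c)'))
```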
@@ -71,7 +71,6 @@ class AtomEnvironment {
    this.deserializers = new DeserializerManager(this)
    this.deserializeTimings = {}
    this.views = new ViewRegistry(this)
    TextEditor.setScheduler(this.views)
    this.notifications = new NotificationManager()

    this.stateStore = new StateStore('AtomEnvironments', 1)
@@ -112,7 +111,13 @@ class AtomEnvironment {
    this.packages.setContextMenuManager(this.contextMenu)
    this.packages.setThemeManager(this.themes)

    this.project = new Project({notificationManager: this.notifications, packageManager: this.packages, config: this.config, applicationDelegate: this.applicationDelegate})
    this.project = new Project({
      notificationManager: this.notifications,
      packageManager: this.packages,
      grammarRegistry: this.grammars,
      config: this.config,
      applicationDelegate: this.applicationDelegate
    })
    this.commandInstaller = new CommandInstaller(this.applicationDelegate)
    this.protocolHandlerInstaller = new ProtocolHandlerInstaller()

@@ -815,10 +820,9 @@ class AtomEnvironment {
      project: this.project.serialize(options),
      workspace: this.workspace.serialize(),
      packageStates: this.packages.serialize(),
      grammars: {grammarOverridesByPath: this.grammars.grammarOverridesByPath},
      grammars: this.grammars.serialize(),
      fullScreen: this.isFullScreen(),
      windowDimensions: this.windowDimensions,
      textEditors: this.textEditors.serialize()
      windowDimensions: this.windowDimensions
    }
  }

@@ -1104,11 +1108,6 @@ class AtomEnvironment {
  async deserialize (state) {
    if (!state) return Promise.resolve()

    const grammarOverridesByPath = state.grammars && state.grammars.grammarOverridesByPath
    if (grammarOverridesByPath) {
      this.grammars.grammarOverridesByPath = grammarOverridesByPath
    }

    this.setFullScreen(state.fullScreen)

    const missingProjectPaths = []
@@ -1133,7 +1132,7 @@ class AtomEnvironment {

    this.deserializeTimings.project = Date.now() - startTime

    if (state.textEditors) this.textEditors.deserialize(state.textEditors)
    if (state.grammars) this.grammars.deserialize(state.grammars)

    startTime = Date.now()
    if (state.workspace) this.workspace.deserialize(state.workspace, this.deserializers)

@@ -705,7 +705,7 @@ class Cursor extends Model {
  */

  getNonWordCharacters () {
    return this.editor.getNonWordCharacters(this.getScopeDescriptor().getScopesArray())
    return this.editor.getNonWordCharacters(this.getBufferPosition())
  }

  changePosition (options, fn) {

@@ -1,28 +1,154 @@
const _ = require('underscore-plus')
const Grim = require('grim')
const FirstMate = require('first-mate')
const {Disposable, CompositeDisposable} = require('event-kit')
const TextMateLanguageMode = require('./text-mate-language-mode')
const Token = require('./token')
const fs = require('fs-plus')
const Grim = require('grim')
const {Point, Range} = require('text-buffer')

const PathSplitRegex = new RegExp('[/.]')
const GRAMMAR_SELECTION_RANGE = Range(Point.ZERO, Point(10, 0)).freeze()
const PATH_SPLIT_REGEX = new RegExp('[/.]')

// Extended: Syntax class holding the grammars used for tokenizing.
// Extended: This class holds the grammars used for tokenizing.
//
// An instance of this class is always available as the `atom.grammars` global.
//
// The Syntax class also contains properties for things such as the
// language-specific comment regexes. See {::getProperty} for more details.
module.exports =
class GrammarRegistry extends FirstMate.GrammarRegistry {
class GrammarRegistry {
  constructor ({config} = {}) {
    super({maxTokensPerLine: 100, maxLineLength: 1000})
    this.config = config
    this.subscriptions = new CompositeDisposable()
    this.textmateRegistry = new FirstMate.GrammarRegistry({maxTokensPerLine: 100, maxLineLength: 1000})
    this.clear()
  }

  clear () {
    this.textmateRegistry.clear()
    if (this.subscriptions) this.subscriptions.dispose()
    this.subscriptions = new CompositeDisposable()
    this.languageOverridesByBufferId = new Map()
    this.grammarScoresByBuffer = new Map()

    const grammarAddedOrUpdated = this.grammarAddedOrUpdated.bind(this)
    this.textmateRegistry.onDidAddGrammar(grammarAddedOrUpdated)
    this.textmateRegistry.onDidUpdateGrammar(grammarAddedOrUpdated)
  }

  serialize () {
    const languageOverridesByBufferId = {}
    this.languageOverridesByBufferId.forEach((languageId, bufferId) => {
      languageOverridesByBufferId[bufferId] = languageId
    })
    return {languageOverridesByBufferId}
  }

  deserialize (params) {
    for (const bufferId in params.languageOverridesByBufferId || {}) {
      this.languageOverridesByBufferId.set(
        bufferId,
        params.languageOverridesByBufferId[bufferId]
      )
    }
  }
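The `serialize`/`deserialize` pair above reduces to a Map-to-plain-object round trip, so per-buffer language overrides survive being stored as JSON in the window state. A standalone sketch, where `OverrideStore` is a stand-in class for illustration (not part of the diff):

```javascript
// Round-trip a Map of bufferId -> languageId through a plain object,
// mirroring GrammarRegistry's serialize()/deserialize() above.
class OverrideStore {
  constructor () {
    this.languageOverridesByBufferId = new Map()
  }

  serialize () {
    const languageOverridesByBufferId = {}
    this.languageOverridesByBufferId.forEach((languageId, bufferId) => {
      languageOverridesByBufferId[bufferId] = languageId
    })
    return {languageOverridesByBufferId}
  }

  deserialize (params) {
    for (const bufferId in params.languageOverridesByBufferId || {}) {
      this.languageOverridesByBufferId.set(bufferId, params.languageOverridesByBufferId[bufferId])
    }
  }
}
```

A Map is used at runtime for cheap insertion and deletion keyed by buffer id, while the serialized form is a plain object so it can pass through `JSON.stringify` unchanged.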

  createToken (value, scopes) {
    return new Token({value, scopes})
  }

  // Extended: set a {TextBuffer}'s language mode based on its path and content,
  // and continue to update its language mode as grammars are added or updated, or
  // the buffer's file path changes.
  //
  // * `buffer` The {TextBuffer} whose language mode will be maintained.
  //
  // Returns a {Disposable} that can be used to stop updating the buffer's
  // language mode.
  maintainLanguageMode (buffer) {
    this.grammarScoresByBuffer.set(buffer, null)

    const languageOverride = this.languageOverridesByBufferId.get(buffer.id)
    if (languageOverride) {
      this.assignLanguageMode(buffer, languageOverride)
    } else {
      this.autoAssignLanguageMode(buffer)
    }

    const pathChangeSubscription = buffer.onDidChangePath(() => {
      this.grammarScoresByBuffer.delete(buffer)
      if (!this.languageOverridesByBufferId.has(buffer.id)) {
        this.autoAssignLanguageMode(buffer)
      }
    })

    const destroySubscription = buffer.onDidDestroy(() => {
      this.grammarScoresByBuffer.delete(buffer)
      this.languageOverridesByBufferId.delete(buffer.id)
      this.subscriptions.remove(destroySubscription)
      this.subscriptions.remove(pathChangeSubscription)
    })

    this.subscriptions.add(pathChangeSubscription, destroySubscription)

    return new Disposable(() => {
      destroySubscription.dispose()
      pathChangeSubscription.dispose()
      this.subscriptions.remove(pathChangeSubscription)
      this.subscriptions.remove(destroySubscription)
      this.grammarScoresByBuffer.delete(buffer)
      this.languageOverridesByBufferId.delete(buffer.id)
    })
  }
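The disposable bookkeeping in `maintainLanguageMode` above follows one pattern throughout: every subscription is added to a shared collection, and the returned disposable both disposes the subscription and removes it from that collection, with repeated disposal being harmless. A minimal sketch, where `Disposable` and `trackSubscription` are simplified stand-ins for the event-kit class and the real method:

```javascript
// A one-shot disposable: running dispose() twice is a no-op.
class Disposable {
  constructor (action) {
    this.action = action
    this.disposed = false
  }

  dispose () {
    if (!this.disposed) {
      this.disposed = true
      this.action()
    }
  }
}

// Track a subscription in a shared set; the returned disposable
// tears it down and forgets it, mirroring maintainLanguageMode.
function trackSubscription (subscriptions, onDispose) {
  const subscription = new Disposable(onDispose)
  subscriptions.add(subscription)
  return new Disposable(() => {
    subscription.dispose()
    subscriptions.delete(subscription)  // mirrors this.subscriptions.remove(...)
  })
}
```

Removing entries on buffer destruction (and on manual disposal) keeps the registry from retaining destroyed buffers.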

  // Extended: Force a {TextBuffer} to use a different grammar than the
  // one that would otherwise be selected for it.
  //
  // * `buffer` The {TextBuffer} whose grammar will be set.
  // * `languageId` The {String} id of the desired language.
  //
  // Returns a {Boolean} that indicates whether the language was successfully
  // found.
  assignLanguageMode (buffer, languageId) {
    if (buffer.getBuffer) buffer = buffer.getBuffer()

    let grammar = null
    if (languageId != null) {
      grammar = this.textmateRegistry.grammarForScopeName(languageId)
      if (!grammar) return false
      this.languageOverridesByBufferId.set(buffer.id, languageId)
    } else {
      this.languageOverridesByBufferId.set(buffer.id, null)
      grammar = this.textmateRegistry.nullGrammar
    }

    this.grammarScoresByBuffer.set(buffer, null)
    if (grammar.scopeName !== buffer.getLanguageMode().getLanguageId()) {
      buffer.setLanguageMode(this.languageModeForGrammarAndBuffer(grammar, buffer))
    }

    return true
  }

  // Extended: Remove any language mode override that has been set for the
  // given {TextBuffer}. This will assign to the buffer the best language
  // mode available.
  //
  // * `buffer` The {TextBuffer}.
  autoAssignLanguageMode (buffer) {
    const result = this.selectGrammarWithScore(
      buffer.getPath(),
      buffer.getTextInRange(GRAMMAR_SELECTION_RANGE)
    )
    this.languageOverridesByBufferId.delete(buffer.id)
    this.grammarScoresByBuffer.set(buffer, result.score)
    if (result.grammar.scopeName !== buffer.getLanguageMode().getLanguageId()) {
      buffer.setLanguageMode(this.languageModeForGrammarAndBuffer(result.grammar, buffer))
    }
  }

  languageModeForGrammarAndBuffer (grammar, buffer) {
    return new TextMateLanguageMode({grammar, buffer, config: this.config})
  }

  // Extended: Select a grammar for the given file path and file contents.
  //
  // This picks the best match by checking the file path and contents against
@@ -39,7 +165,7 @@ class GrammarRegistry extends FirstMate.GrammarRegistry {
  selectGrammarWithScore (filePath, fileContents) {
    let bestMatch = null
    let highestScore = -Infinity
    for (let grammar of this.grammars) {
    for (let grammar of this.textmateRegistry.grammars) {
      const score = this.getGrammarScore(grammar, filePath, fileContents)
      if ((score > highestScore) || (bestMatch == null)) {
        bestMatch = grammar
@@ -70,7 +196,7 @@ class GrammarRegistry extends FirstMate.GrammarRegistry {
    if (!filePath) { return -1 }
    if (process.platform === 'win32') { filePath = filePath.replace(/\\/g, '/') }

    const pathComponents = filePath.toLowerCase().split(PathSplitRegex)
    const pathComponents = filePath.toLowerCase().split(PATH_SPLIT_REGEX)
    let pathScore = -1

    let customFileTypes
@@ -85,7 +211,7 @@ class GrammarRegistry extends FirstMate.GrammarRegistry {

    for (let i = 0; i < fileTypes.length; i++) {
      const fileType = fileTypes[i]
      const fileTypeComponents = fileType.toLowerCase().split(PathSplitRegex)
      const fileTypeComponents = fileType.toLowerCase().split(PATH_SPLIT_REGEX)
      const pathSuffix = pathComponents.slice(-fileTypeComponents.length)
      if (_.isEqual(pathSuffix, fileTypeComponents)) {
        pathScore = Math.max(pathScore, fileType.length)
@@ -126,46 +252,169 @@ class GrammarRegistry extends FirstMate.GrammarRegistry {
  //
  // Returns a {String} such as `"source.js"`.
  grammarOverrideForPath (filePath) {
    Grim.deprecate('Use atom.textEditors.getGrammarOverride(editor) instead')

    const editor = getEditorForPath(filePath)
    if (editor) {
      return atom.textEditors.getGrammarOverride(editor)
    }
    Grim.deprecate('Use buffer.getLanguageMode().getLanguageId() instead')
    const buffer = atom.project.findBufferForPath(filePath)
    if (buffer) return this.languageOverridesByBufferId.get(buffer.id)
  }

  // Deprecated: Set the grammar override for the given file path.
  //
  // * `filePath` A non-empty {String} file path.
  // * `scopeName` A {String} such as `"source.js"`.
  // * `languageId` A {String} such as `"source.js"`.
  //
  // Returns undefined.
  setGrammarOverrideForPath (filePath, scopeName) {
    Grim.deprecate('Use atom.textEditors.setGrammarOverride(editor, scopeName) instead')

    const editor = getEditorForPath(filePath)
    if (editor) {
      atom.textEditors.setGrammarOverride(editor, scopeName)
  setGrammarOverrideForPath (filePath, languageId) {
    Grim.deprecate('Use atom.grammars.assignLanguageMode(buffer, languageId) instead')
    const buffer = atom.project.findBufferForPath(filePath)
    if (buffer) {
      const grammar = this.grammarForScopeName(languageId)
      if (grammar) this.languageOverridesByBufferId.set(buffer.id, grammar.name)
    }
  }

  // Deprecated: Remove the grammar override for the given file path.
  // Remove the grammar override for the given file path.
  //
  // * `filePath` A {String} file path.
  //
  // Returns undefined.
  clearGrammarOverrideForPath (filePath) {
    Grim.deprecate('Use atom.textEditors.clearGrammarOverride(editor) instead')
    Grim.deprecate('Use atom.grammars.autoAssignLanguageMode(buffer) instead')
    const buffer = atom.project.findBufferForPath(filePath)
    if (buffer) this.languageOverridesByBufferId.delete(buffer.id)
  }

    const editor = getEditorForPath(filePath)
    if (editor) {
      atom.textEditors.clearGrammarOverride(editor)
    }
  }
}

function getEditorForPath (filePath) {
  if (filePath != null) {
    return atom.workspace.getTextEditors().find(editor => editor.getPath() === filePath)
  grammarAddedOrUpdated (grammar) {
    this.grammarScoresByBuffer.forEach((score, buffer) => {
      const languageMode = buffer.getLanguageMode()
      if (grammar.injectionSelector) {
        if (languageMode.hasTokenForSelector(grammar.injectionSelector)) {
          languageMode.retokenizeLines()
        }
        return
      }

      const languageOverride = this.languageOverridesByBufferId.get(buffer.id)

      if ((grammar.scopeName === buffer.getLanguageMode().getLanguageId() ||
           grammar.scopeName === languageOverride)) {
        buffer.setLanguageMode(this.languageModeForGrammarAndBuffer(grammar, buffer))
      } else if (!languageOverride) {
        const score = this.getGrammarScore(
          grammar,
          buffer.getPath(),
          buffer.getTextInRange(GRAMMAR_SELECTION_RANGE)
        )

        const currentScore = this.grammarScoresByBuffer.get(buffer)
        if (currentScore == null || score > currentScore) {
          buffer.setLanguageMode(this.languageModeForGrammarAndBuffer(grammar, buffer))
          this.grammarScoresByBuffer.set(buffer, score)
        }
      }
    })
  }

  // Extended: Invoke the given callback when a grammar is added to the registry.
  //
  // * `callback` {Function} to call when a grammar is added.
  //   * `grammar` {Grammar} that was added.
  //
  // Returns a {Disposable} on which `.dispose()` can be called to unsubscribe.
  onDidAddGrammar (callback) {
    return this.textmateRegistry.onDidAddGrammar(callback)
  }

  // Extended: Invoke the given callback when a grammar is updated due to a grammar
  // it depends on being added or removed from the registry.
  //
  // * `callback` {Function} to call when a grammar is updated.
  //   * `grammar` {Grammar} that was updated.
  //
  // Returns a {Disposable} on which `.dispose()` can be called to unsubscribe.
  onDidUpdateGrammar (callback) {
    return this.textmateRegistry.onDidUpdateGrammar(callback)
  }

  get nullGrammar () {
    return this.textmateRegistry.nullGrammar
  }

  get grammars () {
    return this.textmateRegistry.grammars
  }

  decodeTokens () {
    return this.textmateRegistry.decodeTokens.apply(this.textmateRegistry, arguments)
  }

  grammarForScopeName (scopeName) {
    return this.textmateRegistry.grammarForScopeName(scopeName)
  }

  addGrammar (grammar) {
    return this.textmateRegistry.addGrammar(grammar)
  }

  removeGrammar (grammar) {
    return this.textmateRegistry.removeGrammar(grammar)
  }

  removeGrammarForScopeName (scopeName) {
    return this.textmateRegistry.removeGrammarForScopeName(scopeName)
  }

  // Extended: Read a grammar asynchronously and add it to the registry.
  //
  // * `grammarPath` A {String} absolute file path to a grammar file.
  // * `callback` A {Function} to call when loaded with the following arguments:
  //   * `error` An {Error}, may be null.
  //   * `grammar` A {Grammar} or null if an error occurred.
  loadGrammar (grammarPath, callback) {
    return this.textmateRegistry.loadGrammar(grammarPath, callback)
  }

  // Extended: Read a grammar synchronously and add it to this registry.
  //
  // * `grammarPath` A {String} absolute file path to a grammar file.
  //
  // Returns a {Grammar}.
  loadGrammarSync (grammarPath) {
    return this.textmateRegistry.loadGrammarSync(grammarPath)
  }

  // Extended: Read a grammar asynchronously but don't add it to the registry.
  //
  // * `grammarPath` A {String} absolute file path to a grammar file.
  // * `callback` A {Function} to call when read with the following arguments:
  //   * `error` An {Error}, may be null.
  //   * `grammar` A {Grammar} or null if an error occurred.
  //
  // Returns undefined.
  readGrammar (grammarPath, callback) {
    return this.textmateRegistry.readGrammar(grammarPath, callback)
  }

  // Extended: Read a grammar synchronously but don't add it to the registry.
  //
  // * `grammarPath` A {String} absolute file path to a grammar file.
  //
  // Returns a {Grammar}.
  readGrammarSync (grammarPath) {
    return this.textmateRegistry.readGrammarSync(grammarPath)
  }

  createGrammar (grammarPath, params) {
    return this.textmateRegistry.createGrammar(grammarPath, params)
  }

  // Extended: Get all the grammars in this registry.
  //
  // Returns a non-empty {Array} of {Grammar} instances.
  getGrammars () {
    return this.textmateRegistry.getGrammars()
  }

  scopeForId (id) {
    return this.textmateRegistry.scopeForId(id)
  }
}

@@ -50,8 +50,8 @@ export class HistoryManager {
    return this.emitter.on('did-change-projects', callback)
  }

  didChangeProjects (args) {
    this.emitter.emit('did-change-projects', args || { reloaded: false })
  didChangeProjects (args = {reloaded: false}) {
    this.emitter.emit('did-change-projects', args)
  }

  async addProject (paths, lastOpened) {
@@ -93,7 +93,7 @@ export class HistoryManager {
  }

  async loadState () {
    let history = await this.stateStore.load('history-manager')
    const history = await this.stateStore.load('history-manager')
    if (history && history.projects) {
      this.projects = history.projects.filter(p => Array.isArray(p.paths) && p.paths.length > 0).map(p => new HistoryProject(p.paths, new Date(p.lastOpened)))
      this.didChangeProjects({reloaded: true})
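The `HistoryManager` change above swaps `args || {reloaded: false}` for an ES2015 default parameter. The two forms differ for falsy-but-defined arguments: `||` replaces any falsy value (`null`, `0`, `''`), while a default parameter applies only when the argument is `undefined`. A small sketch (the function names are illustrative):

```javascript
// Old style: any falsy argument is replaced.
function withOr (args) { return args || {reloaded: false} }

// New style: the default fires only when the argument is undefined.
function withDefault (args = {reloaded: false}) { return args }

withOr(null)      // → {reloaded: false}
withDefault(null) // → null
```

Since `didChangeProjects` is only ever called with no argument or an explicit object, the two forms behave identically here; the default parameter simply states the intent more directly.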
|
||||
@@ -67,6 +67,7 @@ global.atom = new AtomEnvironment({
|
||||
enablePersistence: true
|
||||
})
|
||||
|
||||
TextEditor.setScheduler(global.atom.views)
|
||||
global.atom.preloadPackages()
|
||||
|
||||
# Like sands through the hourglass, so are the days of our lives.
|
||||
|
||||
@@ -82,6 +82,7 @@ module.exports = ({blobStore}) ->
|
||||
params.onlyLoadBaseStyleSheets = true unless params.hasOwnProperty("onlyLoadBaseStyleSheets")
|
||||
atomEnvironment = new AtomEnvironment(params)
|
||||
atomEnvironment.initialize(params)
|
||||
TextEditor.setScheduler(atomEnvironment.views)
|
||||
atomEnvironment
|
||||
|
||||
promise = testRunner({
|
||||
|
||||
@@ -1,848 +0,0 @@
|
||||
path = require 'path'
|
||||
|
||||
_ = require 'underscore-plus'
|
||||
async = require 'async'
|
||||
CSON = require 'season'
|
||||
fs = require 'fs-plus'
|
||||
{Emitter, CompositeDisposable} = require 'event-kit'
|
||||
|
||||
CompileCache = require './compile-cache'
|
||||
ModuleCache = require './module-cache'
|
||||
ScopedProperties = require './scoped-properties'
|
||||
BufferedProcess = require './buffered-process'
|
||||
|
||||
# Extended: Loads and activates a package's main module and resources such as
|
||||
# stylesheets, keymaps, grammar, editor properties, and menus.
|
||||
module.exports =
|
||||
class Package
|
||||
keymaps: null
|
||||
menus: null
|
||||
stylesheets: null
|
||||
stylesheetDisposables: null
|
||||
grammars: null
|
||||
settings: null
|
||||
mainModulePath: null
|
||||
resolvedMainModulePath: false
|
||||
mainModule: null
|
||||
mainInitialized: false
|
||||
mainActivated: false
|
||||
|
||||
###
|
||||
Section: Construction
|
||||
###
|
||||
|
||||
constructor: (params) ->
|
||||
{
|
||||
@path, @metadata, @bundledPackage, @preloadedPackage, @packageManager, @config, @styleManager, @commandRegistry,
|
||||
@keymapManager, @notificationManager, @grammarRegistry, @themeManager,
|
||||
@menuManager, @contextMenuManager, @deserializerManager, @viewRegistry
|
||||
} = params
|
||||
|
||||
@emitter = new Emitter
|
||||
@metadata ?= @packageManager.loadPackageMetadata(@path)
|
||||
@bundledPackage ?= @packageManager.isBundledPackagePath(@path)
|
||||
@name = @metadata?.name ? params.name ? path.basename(@path)
|
||||
@reset()
|
||||
|
||||
###
|
||||
Section: Event Subscription
|
||||
###
|
||||
|
||||
# Essential: Invoke the given callback when all packages have been activated.
|
||||
#
|
||||
# * `callback` {Function}
|
||||
#
|
||||
# Returns a {Disposable} on which `.dispose()` can be called to unsubscribe.
|
||||
onDidDeactivate: (callback) ->
|
||||
@emitter.on 'did-deactivate', callback
|
||||
|
||||
###
|
||||
Section: Instance Methods
|
||||
###
|
||||
|
||||
enable: ->
|
||||
@config.removeAtKeyPath('core.disabledPackages', @name)
|
||||
|
||||
disable: ->
|
||||
@config.pushAtKeyPath('core.disabledPackages', @name)
|
||||
|
||||
isTheme: ->
|
||||
@metadata?.theme?
|
||||
|
||||
measure: (key, fn) ->
|
||||
startTime = Date.now()
|
||||
value = fn()
|
||||
@[key] = Date.now() - startTime
|
||||
value
|
||||
|
||||
getType: -> 'atom'
|
||||
|
||||
getStyleSheetPriority: -> 0
|
||||
|
||||
preload: ->
|
||||
@loadKeymaps()
|
||||
@loadMenus()
|
||||
@registerDeserializerMethods()
|
||||
@activateCoreStartupServices()
|
||||
@registerURIHandler()
|
||||
@configSchemaRegisteredOnLoad = @registerConfigSchemaFromMetadata()
|
||||
@requireMainModule()
|
||||
@settingsPromise = @loadSettings()
|
||||
|
||||
@activationDisposables = new CompositeDisposable
|
||||
@activateKeymaps()
|
||||
@activateMenus()
|
||||
settings.activate() for settings in @settings
|
||||
@settingsActivated = true
|
||||
|
||||
finishLoading: ->
|
||||
@measure 'loadTime', =>
|
||||
@path = path.join(@packageManager.resourcePath, @path)
|
||||
ModuleCache.add(@path, @metadata)
|
||||
|
||||
@loadStylesheets()
|
||||
# Unfortunately some packages are accessing `@mainModulePath`, so we need
|
||||
# to compute that variable eagerly also for preloaded packages.
|
||||
@getMainModulePath()
|
||||
|
||||
load: ->
|
||||
@measure 'loadTime', =>
|
||||
try
|
||||
ModuleCache.add(@path, @metadata)
|
||||
|
||||
@loadKeymaps()
|
||||
@loadMenus()
|
||||
@loadStylesheets()
|
||||
@registerDeserializerMethods()
|
||||
@activateCoreStartupServices()
|
||||
@registerURIHandler()
|
||||
@registerTranspilerConfig()
|
||||
@configSchemaRegisteredOnLoad = @registerConfigSchemaFromMetadata()
|
||||
@settingsPromise = @loadSettings()
|
||||
if @shouldRequireMainModuleOnLoad() and not @mainModule?
|
||||
@requireMainModule()
|
||||
catch error
|
||||
@handleError("Failed to load the #{@name} package", error)
|
||||
this
|
||||
|
||||
unload: ->
|
||||
@unregisterTranspilerConfig()
|
||||
|
||||
shouldRequireMainModuleOnLoad: ->
|
||||
not (
|
||||
@metadata.deserializers? or
|
||||
@metadata.viewProviders? or
|
||||
@metadata.configSchema? or
|
||||
@activationShouldBeDeferred() or
|
||||
localStorage.getItem(@getCanDeferMainModuleRequireStorageKey()) is 'true'
|
||||
)
|
||||
|
||||
reset: ->
|
||||
@stylesheets = []
|
||||
@keymaps = []
|
||||
@menus = []
|
||||
@grammars = []
|
||||
@settings = []
|
||||
@mainInitialized = false
|
||||
@mainActivated = false
|
||||
|
||||
initializeIfNeeded: ->
|
||||
return if @mainInitialized
|
||||
@measure 'initializeTime', =>
|
||||
try
|
||||
# The main module's `initialize()` method is guaranteed to be called
|
||||
# before its `activate()`. This gives you a chance to handle the
|
||||
# serialized package state before the package's derserializers and view
|
||||
# providers are used.
|
||||
@requireMainModule() unless @mainModule?
|
||||
@mainModule.initialize?(@packageManager.getPackageState(@name) ? {})
|
||||
@mainInitialized = true
|
||||
catch error
|
||||
@handleError("Failed to initialize the #{@name} package", error)
|
||||
return

  activate: ->
    @grammarsPromise ?= @loadGrammars()
    @activationPromise ?=
      new Promise (resolve, reject) =>
        @resolveActivationPromise = resolve
        @measure 'activateTime', =>
          try
            @activateResources()
            if @activationShouldBeDeferred()
              @subscribeToDeferredActivation()
            else
              @activateNow()
          catch error
            @handleError("Failed to activate the #{@name} package", error)

    Promise.all([@grammarsPromise, @settingsPromise, @activationPromise])

  activateNow: ->
    try
      @requireMainModule() unless @mainModule?
      @configSchemaRegisteredOnActivate = @registerConfigSchemaFromMainModule()
      @registerViewProviders()
      @activateStylesheets()
      if @mainModule? and not @mainActivated
        @initializeIfNeeded()
        @mainModule.activateConfig?()
        @mainModule.activate?(@packageManager.getPackageState(@name) ? {})
        @mainActivated = true
        @activateServices()
      @activationCommandSubscriptions?.dispose()
      @activationHookSubscriptions?.dispose()
    catch error
      @handleError("Failed to activate the #{@name} package", error)

    @resolveActivationPromise?()

  registerConfigSchemaFromMetadata: ->
    if configSchema = @metadata.configSchema
      @config.setSchema @name, {type: 'object', properties: configSchema}
      true
    else
      false

  registerConfigSchemaFromMainModule: ->
    if @mainModule? and not @configSchemaRegisteredOnLoad
      if @mainModule.config? and typeof @mainModule.config is 'object'
        @config.setSchema @name, {type: 'object', properties: @mainModule.config}
        return true
    false

  # TODO: Remove. Settings view calls this method currently.
  activateConfig: ->
    return if @configSchemaRegisteredOnLoad
    @requireMainModule()
    @registerConfigSchemaFromMainModule()

  activateStylesheets: ->
    return if @stylesheetsActivated

    @stylesheetDisposables = new CompositeDisposable

    priority = @getStyleSheetPriority()
    for [sourcePath, source] in @stylesheets
      if match = path.basename(sourcePath).match(/[^.]*\.([^.]*)\./)
        context = match[1]
      else if @metadata.theme is 'syntax'
        context = 'atom-text-editor'
      else
        context = undefined

      @stylesheetDisposables.add(
        @styleManager.addStyleSheet(
          source,
          {
            sourcePath,
            priority,
            context,
            skipDeprecatedSelectorsTransformation: @bundledPackage
          }
        )
      )
    @stylesheetsActivated = true

  activateResources: ->
    @activationDisposables ?= new CompositeDisposable

    keymapIsDisabled = _.include(@config.get("core.packagesWithKeymapsDisabled") ? [], @name)
    if keymapIsDisabled
      @deactivateKeymaps()
    else unless @keymapActivated
      @activateKeymaps()

    unless @menusActivated
      @activateMenus()

    unless @grammarsActivated
      grammar.activate() for grammar in @grammars
      @grammarsActivated = true

    unless @settingsActivated
      settings.activate() for settings in @settings
      @settingsActivated = true

  activateKeymaps: ->
    return if @keymapActivated

    @keymapDisposables = new CompositeDisposable()

    validateSelectors = not @preloadedPackage
    @keymapDisposables.add(@keymapManager.add(keymapPath, map, 0, validateSelectors)) for [keymapPath, map] in @keymaps
    @menuManager.update()

    @keymapActivated = true

  deactivateKeymaps: ->
    return if not @keymapActivated

    @keymapDisposables?.dispose()
    @menuManager.update()

    @keymapActivated = false

  hasKeymaps: ->
    for [path, map] in @keymaps
      if map.length > 0
        return true
    false

  activateMenus: ->
    validateSelectors = not @preloadedPackage
    for [menuPath, map] in @menus when map['context-menu']?
      try
        itemsBySelector = map['context-menu']
        @activationDisposables.add(@contextMenuManager.add(itemsBySelector, validateSelectors))
      catch error
        if error.code is 'EBADSELECTOR'
          error.message += " in #{menuPath}"
          error.stack += "\n  at #{menuPath}:1:1"
        throw error

    for [menuPath, map] in @menus when map['menu']?
      @activationDisposables.add(@menuManager.add(map['menu']))

    @menusActivated = true

  activateServices: ->
    for name, {versions} of @metadata.providedServices
      servicesByVersion = {}
      for version, methodName of versions
        if typeof @mainModule[methodName] is 'function'
          servicesByVersion[version] = @mainModule[methodName]()
      @activationDisposables.add @packageManager.serviceHub.provide(name, servicesByVersion)

    for name, {versions} of @metadata.consumedServices
      for version, methodName of versions
        if typeof @mainModule[methodName] is 'function'
          @activationDisposables.add @packageManager.serviceHub.consume(name, version, @mainModule[methodName].bind(@mainModule))
    return

  registerURIHandler: ->
    handlerConfig = @getURIHandler()
    if methodName = handlerConfig?.method
      @uriHandlerSubscription = @packageManager.registerURIHandlerForPackage @name, (args...) =>
        @handleURI(methodName, args)

  unregisterURIHandler: ->
    @uriHandlerSubscription?.dispose()

  handleURI: (methodName, args) ->
    @activate().then => @mainModule[methodName]?.apply(@mainModule, args)
    @activateNow() unless @mainActivated

  registerTranspilerConfig: ->
    if @metadata.atomTranspilers
      CompileCache.addTranspilerConfigForPath(@path, @name, @metadata, @metadata.atomTranspilers)

  unregisterTranspilerConfig: ->
    if @metadata.atomTranspilers
      CompileCache.removeTranspilerConfigForPath(@path)

  loadKeymaps: ->
    if @bundledPackage and @packageManager.packagesCache[@name]?
      @keymaps = (["core:#{keymapPath}", keymapObject] for keymapPath, keymapObject of @packageManager.packagesCache[@name].keymaps)
    else
      @keymaps = @getKeymapPaths().map (keymapPath) -> [keymapPath, CSON.readFileSync(keymapPath, allowDuplicateKeys: false) ? {}]
    return

  loadMenus: ->
    if @bundledPackage and @packageManager.packagesCache[@name]?
      @menus = (["core:#{menuPath}", menuObject] for menuPath, menuObject of @packageManager.packagesCache[@name].menus)
    else
      @menus = @getMenuPaths().map (menuPath) -> [menuPath, CSON.readFileSync(menuPath) ? {}]
    return

  getKeymapPaths: ->
    keymapsDirPath = path.join(@path, 'keymaps')
    if @metadata.keymaps
      @metadata.keymaps.map (name) -> fs.resolve(keymapsDirPath, name, ['json', 'cson', ''])
    else
      fs.listSync(keymapsDirPath, ['cson', 'json'])

  getMenuPaths: ->
    menusDirPath = path.join(@path, 'menus')
    if @metadata.menus
      @metadata.menus.map (name) -> fs.resolve(menusDirPath, name, ['json', 'cson', ''])
    else
      fs.listSync(menusDirPath, ['cson', 'json'])

  loadStylesheets: ->
    @stylesheets = @getStylesheetPaths().map (stylesheetPath) =>
      [stylesheetPath, @themeManager.loadStylesheet(stylesheetPath, true)]

  registerDeserializerMethods: ->
    if @metadata.deserializers?
      Object.keys(@metadata.deserializers).forEach (deserializerName) =>
        methodName = @metadata.deserializers[deserializerName]
        @deserializerManager.add
          name: deserializerName,
          deserialize: (state, atomEnvironment) =>
            @registerViewProviders()
            @requireMainModule()
            @initializeIfNeeded()
            @mainModule[methodName](state, atomEnvironment)
    return
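  # For illustration only (hypothetical package, not part of this file): a
  # package opts into lazy deserialization by mapping deserializer names to
  # main-module method names in its package.json, e.g.
  #
  #   "deserializers": {
  #     "MyPackageView": "deserializeMyPackageView"
  #   }
  #
  # The wrapper registered above then requires and initializes the main module
  # only when one of these names is actually deserialized.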

  activateCoreStartupServices: ->
    if directoryProviderService = @metadata.providedServices?['atom.directory-provider']
      @requireMainModule()
      servicesByVersion = {}
      for version, methodName of directoryProviderService.versions
        if typeof @mainModule[methodName] is 'function'
          servicesByVersion[version] = @mainModule[methodName]()
      @packageManager.serviceHub.provide('atom.directory-provider', servicesByVersion)

  registerViewProviders: ->
    if @metadata.viewProviders? and not @registeredViewProviders
      @requireMainModule()
      @metadata.viewProviders.forEach (methodName) =>
        @viewRegistry.addViewProvider (model) =>
          @initializeIfNeeded()
          @mainModule[methodName](model)
      @registeredViewProviders = true

  getStylesheetsPath: ->
    path.join(@path, 'styles')

  getStylesheetPaths: ->
    if @bundledPackage and @packageManager.packagesCache[@name]?.styleSheetPaths?
      styleSheetPaths = @packageManager.packagesCache[@name].styleSheetPaths
      styleSheetPaths.map (styleSheetPath) => path.join(@path, styleSheetPath)
    else
      stylesheetDirPath = @getStylesheetsPath()
      if @metadata.mainStyleSheet
        [fs.resolve(@path, @metadata.mainStyleSheet)]
      else if @metadata.styleSheets
        @metadata.styleSheets.map (name) -> fs.resolve(stylesheetDirPath, name, ['css', 'less', ''])
      else if indexStylesheet = fs.resolve(@path, 'index', ['css', 'less'])
        [indexStylesheet]
      else
        fs.listSync(stylesheetDirPath, ['css', 'less'])

  loadGrammarsSync: ->
    return if @grammarsLoaded

    if @preloadedPackage and @packageManager.packagesCache[@name]?
      grammarPaths = @packageManager.packagesCache[@name].grammarPaths
    else
      grammarPaths = fs.listSync(path.join(@path, 'grammars'), ['json', 'cson'])

    for grammarPath in grammarPaths
      if @preloadedPackage and @packageManager.packagesCache[@name]?
        grammarPath = path.resolve(@packageManager.resourcePath, grammarPath)

      try
        grammar = @grammarRegistry.readGrammarSync(grammarPath)
        grammar.packageName = @name
        grammar.bundledPackage = @bundledPackage
        @grammars.push(grammar)
        grammar.activate()
      catch error
        console.warn("Failed to load grammar: #{grammarPath}", error.stack ? error)

    @grammarsLoaded = true
    @grammarsActivated = true

  loadGrammars: ->
    return Promise.resolve() if @grammarsLoaded

    loadGrammar = (grammarPath, callback) =>
      if @preloadedPackage
        grammarPath = path.resolve(@packageManager.resourcePath, grammarPath)

      @grammarRegistry.readGrammar grammarPath, (error, grammar) =>
        if error?
          detail = "#{error.message} in #{grammarPath}"
          stack = "#{error.stack}\n  at #{grammarPath}:1:1"
          @notificationManager.addFatalError("Failed to load a #{@name} package grammar", {stack, detail, packageName: @name, dismissable: true})
        else
          grammar.packageName = @name
          grammar.bundledPackage = @bundledPackage
          @grammars.push(grammar)
          grammar.activate() if @grammarsActivated
        callback()

    new Promise (resolve) =>
      if @preloadedPackage and @packageManager.packagesCache[@name]?
        grammarPaths = @packageManager.packagesCache[@name].grammarPaths
        async.each grammarPaths, loadGrammar, -> resolve()
      else
        grammarsDirPath = path.join(@path, 'grammars')
        fs.exists grammarsDirPath, (grammarsDirExists) ->
          return resolve() unless grammarsDirExists

          fs.list grammarsDirPath, ['json', 'cson'], (error, grammarPaths=[]) ->
            async.each grammarPaths, loadGrammar, -> resolve()

  loadSettings: ->
    @settings = []

    loadSettingsFile = (settingsPath, callback) =>
      ScopedProperties.load settingsPath, @config, (error, settings) =>
        if error?
          detail = "#{error.message} in #{settingsPath}"
          stack = "#{error.stack}\n  at #{settingsPath}:1:1"
          @notificationManager.addFatalError("Failed to load the #{@name} package settings", {stack, detail, packageName: @name, dismissable: true})
        else
          @settings.push(settings)
          settings.activate() if @settingsActivated
        callback()

    new Promise (resolve) =>
      if @preloadedPackage and @packageManager.packagesCache[@name]?
        for settingsPath, scopedProperties of @packageManager.packagesCache[@name].settings
          settings = new ScopedProperties("core:#{settingsPath}", scopedProperties ? {}, @config)
          @settings.push(settings)
          settings.activate() if @settingsActivated
        resolve()
      else
        settingsDirPath = path.join(@path, 'settings')
        fs.exists settingsDirPath, (settingsDirExists) ->
          return resolve() unless settingsDirExists

          fs.list settingsDirPath, ['json', 'cson'], (error, settingsPaths=[]) ->
            async.each settingsPaths, loadSettingsFile, -> resolve()

  serialize: ->
    if @mainActivated
      try
        @mainModule?.serialize?()
      catch e
        console.error "Error serializing package '#{@name}'", e.stack

  deactivate: ->
    @activationPromise = null
    @resolveActivationPromise = null
    @activationCommandSubscriptions?.dispose()
    @activationHookSubscriptions?.dispose()
    @configSchemaRegisteredOnActivate = false
    @unregisterURIHandler()
    @deactivateResources()
    @deactivateKeymaps()

    unless @mainActivated
      @emitter.emit 'did-deactivate'
      return

    try
      deactivationResult = @mainModule?.deactivate?()
    catch e
      console.error "Error deactivating package '#{@name}'", e.stack

    # We support then-able async promises as well as sync ones from deactivate
    if typeof deactivationResult?.then is 'function'
      deactivationResult.then => @afterDeactivation()
    else
      @afterDeactivation()
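  # For illustration only (hypothetical package, not part of this file): a
  # main module may return a then-able from `deactivate` to delay teardown, e.g.
  #
  #   deactivate: ->
  #     @server.stop()  # returns a Promise
  #
  # `afterDeactivation` (and the 'did-deactivate' event) then waits for that
  # promise to resolve; any non-thenable return value deactivates synchronously.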

  afterDeactivation: ->
    try
      @mainModule?.deactivateConfig?()
    catch e
      console.error "Error deactivating package '#{@name}'", e.stack
    @mainActivated = false
    @mainInitialized = false
    @emitter.emit 'did-deactivate'

  deactivateResources: ->
    grammar.deactivate() for grammar in @grammars
    settings.deactivate() for settings in @settings
    @stylesheetDisposables?.dispose()
    @activationDisposables?.dispose()
    @keymapDisposables?.dispose()
    @stylesheetsActivated = false
    @grammarsActivated = false
    @settingsActivated = false
    @menusActivated = false

  reloadStylesheets: ->
    try
      @loadStylesheets()
    catch error
      @handleError("Failed to reload the #{@name} package stylesheets", error)

    @stylesheetDisposables?.dispose()
    @stylesheetDisposables = new CompositeDisposable
    @stylesheetsActivated = false
    @activateStylesheets()

  requireMainModule: ->
    if @bundledPackage and @packageManager.packagesCache[@name]?
      if @packageManager.packagesCache[@name].main?
        @mainModule = require(@packageManager.packagesCache[@name].main)
    else if @mainModuleRequired
      @mainModule
    else if not @isCompatible()
      console.warn """
        Failed to require the main module of '#{@name}' because it requires one or more incompatible native modules (#{_.pluck(@incompatibleModules, 'name').join(', ')}).
        Run `apm rebuild` in the package directory and restart Atom to resolve.
      """
      return
    else
      mainModulePath = @getMainModulePath()
      if fs.isFileSync(mainModulePath)
        @mainModuleRequired = true

        previousViewProviderCount = @viewRegistry.getViewProviderCount()
        previousDeserializerCount = @deserializerManager.getDeserializerCount()
        @mainModule = require(mainModulePath)
        if (@viewRegistry.getViewProviderCount() is previousViewProviderCount and
            @deserializerManager.getDeserializerCount() is previousDeserializerCount)
          localStorage.setItem(@getCanDeferMainModuleRequireStorageKey(), 'true')

  getMainModulePath: ->
    return @mainModulePath if @resolvedMainModulePath
    @resolvedMainModulePath = true

    if @bundledPackage and @packageManager.packagesCache[@name]?
      if @packageManager.packagesCache[@name].main
        @mainModulePath = path.resolve(@packageManager.resourcePath, 'static', @packageManager.packagesCache[@name].main)
      else
        @mainModulePath = null
    else
      mainModulePath =
        if @metadata.main
          path.join(@path, @metadata.main)
        else
          path.join(@path, 'index')
      @mainModulePath = fs.resolveExtension(mainModulePath, ["", CompileCache.supportedExtensions...])

  activationShouldBeDeferred: ->
    @hasActivationCommands() or @hasActivationHooks() or @hasDeferredURIHandler()

  hasActivationHooks: ->
    @getActivationHooks()?.length > 0

  hasActivationCommands: ->
    for selector, commands of @getActivationCommands()
      return true if commands.length > 0
    false

  hasDeferredURIHandler: ->
    @getURIHandler() and @getURIHandler().deferActivation isnt false

  subscribeToDeferredActivation: ->
    @subscribeToActivationCommands()
    @subscribeToActivationHooks()

  subscribeToActivationCommands: ->
    @activationCommandSubscriptions = new CompositeDisposable
    for selector, commands of @getActivationCommands()
      for command in commands
        do (selector, command) =>
          # Add dummy command so it appears in menu.
          # The real command will be registered on package activation
          try
            @activationCommandSubscriptions.add @commandRegistry.add selector, command, ->
          catch error
            if error.code is 'EBADSELECTOR'
              metadataPath = path.join(@path, 'package.json')
              error.message += " in #{metadataPath}"
              error.stack += "\n  at #{metadataPath}:1:1"
            throw error

          @activationCommandSubscriptions.add @commandRegistry.onWillDispatch (event) =>
            return unless event.type is command
            currentTarget = event.target
            while currentTarget
              if currentTarget.webkitMatchesSelector(selector)
                @activationCommandSubscriptions.dispose()
                @activateNow()
                break
              currentTarget = currentTarget.parentElement
            return
    return

  getActivationCommands: ->
    return @activationCommands if @activationCommands?

    @activationCommands = {}

    if @metadata.activationCommands?
      for selector, commands of @metadata.activationCommands
        @activationCommands[selector] ?= []
        if _.isString(commands)
          @activationCommands[selector].push(commands)
        else if _.isArray(commands)
          @activationCommands[selector].push(commands...)

    @activationCommands
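  # For illustration only (hypothetical package, not part of this file):
  # `activationCommands` in package.json maps a CSS selector to either a
  # single command name or an array of names; both forms are normalized to
  # arrays above, e.g.
  #
  #   "activationCommands": {
  #     "atom-workspace": ["my-package:toggle", "my-package:show"]
  #   }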

  subscribeToActivationHooks: ->
    @activationHookSubscriptions = new CompositeDisposable
    for hook in @getActivationHooks()
      do (hook) =>
        @activationHookSubscriptions.add(@packageManager.onDidTriggerActivationHook(hook, => @activateNow())) if hook? and _.isString(hook) and hook.trim().length > 0

    return

  getActivationHooks: ->
    return @activationHooks if @metadata? and @activationHooks?

    @activationHooks = []

    if @metadata.activationHooks?
      if _.isArray(@metadata.activationHooks)
        @activationHooks.push(@metadata.activationHooks...)
      else if _.isString(@metadata.activationHooks)
        @activationHooks.push(@metadata.activationHooks)

    @activationHooks = _.uniq(@activationHooks)

  getURIHandler: ->
    @metadata?.uriHandler

  # Does the given module path contain native code?
  isNativeModule: (modulePath) ->
    try
      fs.listSync(path.join(modulePath, 'build', 'Release'), ['.node']).length > 0
    catch error
      false

  # Get an array of all the native modules that this package depends on.
  #
  # First try to get this information from
  # @metadata._atomModuleCache.extensions. If @metadata._atomModuleCache doesn't
  # exist, recurse through all dependencies.
  getNativeModuleDependencyPaths: ->
    nativeModulePaths = []

    if @metadata._atomModuleCache?
      relativeNativeModuleBindingPaths = @metadata._atomModuleCache.extensions?['.node'] ? []
      for relativeNativeModuleBindingPath in relativeNativeModuleBindingPaths
        nativeModulePath = path.join(@path, relativeNativeModuleBindingPath, '..', '..', '..')
        nativeModulePaths.push(nativeModulePath)
      return nativeModulePaths

    traversePath = (nodeModulesPath) =>
      try
        for modulePath in fs.listSync(nodeModulesPath)
          nativeModulePaths.push(modulePath) if @isNativeModule(modulePath)
          traversePath(path.join(modulePath, 'node_modules'))
      return

    traversePath(path.join(@path, 'node_modules'))
    nativeModulePaths

  ###
  Section: Native Module Compatibility
  ###

  # Extended: Are all native modules depended on by this package correctly
  # compiled against the current version of Atom?
  #
  # Incompatible packages cannot be activated.
  #
  # Returns a {Boolean}, true if compatible, false if incompatible.
  isCompatible: ->
    return @compatible if @compatible?

    if @preloadedPackage
      # Preloaded packages are always considered compatible
      @compatible = true
    else if @getMainModulePath()
      @incompatibleModules = @getIncompatibleNativeModules()
      @compatible = @incompatibleModules.length is 0 and not @getBuildFailureOutput()?
    else
      @compatible = true

  # Extended: Rebuild native modules in this package's dependencies for the
  # current version of Atom.
  #
  # Returns a {Promise} that resolves with an object containing `code`,
  # `stdout`, and `stderr` properties based on the results of running
  # `apm rebuild` on the package.
  rebuild: ->
    new Promise (resolve) =>
      @runRebuildProcess (result) =>
        if result.code is 0
          global.localStorage.removeItem(@getBuildFailureOutputStorageKey())
        else
          @compatible = false
          global.localStorage.setItem(@getBuildFailureOutputStorageKey(), result.stderr)
          global.localStorage.setItem(@getIncompatibleNativeModulesStorageKey(), '[]')
        resolve(result)

  # Extended: If a previous rebuild failed, get the contents of stderr.
  #
  # Returns a {String} or null if no previous build failure occurred.
  getBuildFailureOutput: ->
    global.localStorage.getItem(@getBuildFailureOutputStorageKey())

  runRebuildProcess: (callback) ->
    stderr = ''
    stdout = ''
    new BufferedProcess({
      command: @packageManager.getApmPath()
      args: ['rebuild', '--no-color']
      options: {cwd: @path}
      stderr: (output) -> stderr += output
      stdout: (output) -> stdout += output
      exit: (code) -> callback({code, stdout, stderr})
    })

  getBuildFailureOutputStorageKey: ->
    "installed-packages:#{@name}:#{@metadata.version}:build-error"

  getIncompatibleNativeModulesStorageKey: ->
    electronVersion = process.versions.electron
    "installed-packages:#{@name}:#{@metadata.version}:electron-#{electronVersion}:incompatible-native-modules"

  getCanDeferMainModuleRequireStorageKey: ->
    "installed-packages:#{@name}:#{@metadata.version}:can-defer-main-module-require"

  # Get the incompatible native modules that this package depends on.
  # This recurses through all dependencies and requires all modules that
  # contain a `.node` file.
  #
  # This information is cached in local storage on a per package/version basis
  # to minimize the impact on startup time.
  getIncompatibleNativeModules: ->
    unless @packageManager.devMode
      try
        if arrayAsString = global.localStorage.getItem(@getIncompatibleNativeModulesStorageKey())
          return JSON.parse(arrayAsString)

    incompatibleNativeModules = []
    for nativeModulePath in @getNativeModuleDependencyPaths()
      try
        require(nativeModulePath)
      catch error
        try
          version = require("#{nativeModulePath}/package.json").version
        incompatibleNativeModules.push
          path: nativeModulePath
          name: path.basename(nativeModulePath)
          version: version
          error: error.message

    global.localStorage.setItem(@getIncompatibleNativeModulesStorageKey(), JSON.stringify(incompatibleNativeModules))
    incompatibleNativeModules

  handleError: (message, error) ->
    if atom.inSpecMode()
      throw error

    if error.filename and error.location and (error instanceof SyntaxError)
      location = "#{error.filename}:#{error.location.first_line + 1}:#{error.location.first_column + 1}"
      detail = "#{error.message} in #{location}"
      stack = """
        SyntaxError: #{error.message}
          at #{location}
      """
    else if error.less and error.filename and error.column? and error.line?
      # Less errors
      location = "#{error.filename}:#{error.line}:#{error.column}"
      detail = "#{error.message} in #{location}"
      stack = """
        LessError: #{error.message}
          at #{location}
      """
    else
      detail = error.message
      stack = error.stack ? error

    @notificationManager.addFatalError(message, {stack, detail, packageName: @name, dismissable: true})

1107  src/package.js  Normal file (file diff suppressed because it is too large)
@@ -2,7 +2,7 @@ const path = require('path')
|
||||
|
||||
const _ = require('underscore-plus')
|
||||
const fs = require('fs-plus')
|
||||
const {Emitter, Disposable} = require('event-kit')
|
||||
const {Emitter, Disposable, CompositeDisposable} = require('event-kit')
|
||||
const TextBuffer = require('text-buffer')
|
||||
const {watchPath} = require('./path-watcher')
|
||||
|
||||
@@ -19,10 +19,12 @@ class Project extends Model {
|
||||
Section: Construction and Destruction
|
||||
*/
|
||||
|
||||
constructor ({notificationManager, packageManager, config, applicationDelegate}) {
|
||||
constructor ({notificationManager, packageManager, config, applicationDelegate, grammarRegistry}) {
|
||||
super()
|
||||
this.notificationManager = notificationManager
|
||||
this.applicationDelegate = applicationDelegate
|
||||
this.grammarRegistry = grammarRegistry
|
||||
|
||||
this.emitter = new Emitter()
|
||||
this.buffers = []
|
||||
this.rootDirectories = []
|
||||
@@ -35,6 +37,7 @@ class Project extends Model {
|
||||
this.watcherPromisesByPath = {}
|
||||
this.retiredBufferIDs = new Set()
|
||||
this.retiredBufferPaths = new Set()
|
||||
this.subscriptions = new CompositeDisposable()
|
||||
this.consumeServices(packageManager)
|
||||
}
|
||||
|
||||
@@ -54,6 +57,9 @@ class Project extends Model {
|
||||
this.emitter.dispose()
|
||||
this.emitter = new Emitter()
|
||||
|
||||
this.subscriptions.dispose()
|
||||
this.subscriptions = new CompositeDisposable()
|
||||
|
||||
for (let buffer of this.buffers) {
|
||||
if (buffer != null) buffer.destroy()
|
||||
}
|
||||
@@ -104,6 +110,7 @@ class Project extends Model {
|
||||
return Promise.all(bufferPromises).then(buffers => {
|
||||
this.buffers = buffers.filter(Boolean)
|
||||
for (let buffer of this.buffers) {
|
||||
this.grammarRegistry.maintainLanguageMode(buffer)
|
||||
this.subscribeToBuffer(buffer)
|
||||
}
|
||||
this.setPaths(state.paths || [], {mustExist: true, exact: true})
|
||||
@@ -654,11 +661,8 @@ class Project extends Model {
|
||||
}
|
||||
|
||||
addBuffer (buffer, options = {}) {
|
||||
return this.addBufferAtIndex(buffer, this.buffers.length, options)
|
||||
}
|
||||
|
||||
addBufferAtIndex (buffer, index, options = {}) {
|
||||
this.buffers.splice(index, 0, buffer)
|
||||
this.buffers.push(buffer)
|
||||
this.subscriptions.add(this.grammarRegistry.maintainLanguageMode(buffer))
|
||||
this.subscribeToBuffer(buffer)
|
||||
this.emitter.emit('did-add-buffer', buffer)
|
||||
return buffer
|
||||
|
||||
@@ -448,9 +448,19 @@ class Selection {
|
||||
if (options.autoIndent && textIsAutoIndentable && !NonWhitespaceRegExp.test(precedingText) && (remainingLines.length > 0)) {
|
||||
autoIndentFirstLine = true
|
||||
const firstLine = precedingText + firstInsertedLine
|
||||
desiredIndentLevel = this.editor.tokenizedBuffer.suggestedIndentForLineAtBufferRow(oldBufferRange.start.row, firstLine)
|
||||
indentAdjustment = desiredIndentLevel - this.editor.indentLevelForLine(firstLine)
|
||||
this.adjustIndent(remainingLines, indentAdjustment)
|
||||
const languageMode = this.editor.buffer.getLanguageMode()
|
||||
desiredIndentLevel = (
|
||||
languageMode.suggestedIndentForLineAtBufferRow &&
|
||||
languageMode.suggestedIndentForLineAtBufferRow(
|
||||
oldBufferRange.start.row,
|
||||
firstLine,
|
||||
this.editor.getTabLength()
|
||||
)
|
||||
)
|
||||
if (desiredIndentLevel != null) {
|
||||
indentAdjustment = desiredIndentLevel - this.editor.indentLevelForLine(firstLine)
|
||||
this.adjustIndent(remainingLines, indentAdjustment)
|
||||
}
|
||||
}
|
||||
|
||||
text = firstInsertedLine
|
||||
|
||||
@@ -1,9 +1,6 @@
|
||||
/** @babel */
|
||||
|
||||
import {Emitter, Disposable, CompositeDisposable} from 'event-kit'
|
||||
import {Point, Range} from 'text-buffer'
|
||||
import TextEditor from './text-editor'
|
||||
import ScopeDescriptor from './scope-descriptor'
|
||||
const {Emitter, Disposable, CompositeDisposable} = require('event-kit')
|
||||
const TextEditor = require('./text-editor')
|
||||
const ScopeDescriptor = require('./scope-descriptor')
|
||||
|
||||
const EDITOR_PARAMS_BY_SETTING_KEY = [
|
||||
['core.fileEncoding', 'encoding'],
|
||||
@@ -23,12 +20,9 @@ const EDITOR_PARAMS_BY_SETTING_KEY = [
|
||||
['editor.autoIndentOnPaste', 'autoIndentOnPaste'],
|
||||
['editor.scrollPastEnd', 'scrollPastEnd'],
|
||||
['editor.undoGroupingInterval', 'undoGroupingInterval'],
|
||||
['editor.nonWordCharacters', 'nonWordCharacters'],
|
||||
['editor.scrollSensitivity', 'scrollSensitivity']
|
||||
]
|
||||
|
||||
const GRAMMAR_SELECTION_RANGE = Range(Point.ZERO, Point(10, 0)).freeze()
|
||||
|
||||
// Experimental: This global registry tracks registered `TextEditors`.
|
||||
//
|
||||
// If you want to add functionality to a wider set of text editors than just
|
||||
@@ -40,13 +34,11 @@ const GRAMMAR_SELECTION_RANGE = Range(Point.ZERO, Point(10, 0)).freeze()
|
||||
// them for observation via `atom.textEditors.add`. **Important:** When you're
|
||||
// done using your editor, be sure to call `dispose` on the returned disposable
|
||||
// to avoid leaking editors.
|
||||
export default class TextEditorRegistry {
|
||||
constructor ({config, grammarRegistry, assert, packageManager}) {
|
||||
module.exports =
|
||||
class TextEditorRegistry {
|
||||
constructor ({config, assert, packageManager}) {
|
||||
this.assert = assert
|
||||
this.config = config
|
||||
this.grammarRegistry = grammarRegistry
|
||||
this.scopedSettingsDelegate = new ScopedSettingsDelegate(config)
|
||||
this.grammarAddedOrUpdated = this.grammarAddedOrUpdated.bind(this)
|
||||
this.clear()
|
||||
|
||||
this.initialPackageActivationPromise = new Promise((resolve) => {
@@ -83,10 +75,6 @@ export default class TextEditorRegistry {
    this.editorsWithMaintainedGrammar = new Set()
    this.editorGrammarOverrides = {}
    this.editorGrammarScores = new WeakMap()
    this.subscriptions.add(
      this.grammarRegistry.onDidAddGrammar(this.grammarAddedOrUpdated),
      this.grammarRegistry.onDidUpdateGrammar(this.grammarAddedOrUpdated)
    )
  }

  destroy () {
@@ -114,10 +102,10 @@ export default class TextEditorRegistry {

    let scope = null
    if (params.buffer) {
      const filePath = params.buffer.getPath()
      const headContent = params.buffer.getTextInRange(GRAMMAR_SELECTION_RANGE)
      params.grammar = this.grammarRegistry.selectGrammar(filePath, headContent)
      scope = new ScopeDescriptor({scopes: [params.grammar.scopeName]})
      const {grammar} = params.buffer.getLanguageMode()
      if (grammar) {
        scope = new ScopeDescriptor({scopes: [grammar.scopeName]})
      }
    }

    Object.assign(params, this.textEditorParamsForScope(scope))
@@ -159,8 +147,6 @@ export default class TextEditorRegistry {
  }
    this.editorsWithMaintainedConfig.add(editor)

    editor.setScopedSettingsDelegate(this.scopedSettingsDelegate)

    this.subscribeToSettingsForEditorScope(editor)
    const grammarChangeSubscription = editor.onDidChangeGrammar(() => {
      this.subscribeToSettingsForEditorScope(editor)
@@ -182,7 +168,6 @@ export default class TextEditorRegistry {

    return new Disposable(() => {
      this.editorsWithMaintainedConfig.delete(editor)
      editor.setScopedSettingsDelegate(null)
      tokenizeSubscription.dispose()
      grammarChangeSubscription.dispose()
      this.subscriptions.remove(grammarChangeSubscription)
@@ -190,134 +175,43 @@ export default class TextEditorRegistry {
    })
  }
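The registry methods above all hand back an event-kit `Disposable` whose `dispose()` undoes the registration. A minimal sketch of that pattern (a simplified stand-in, not the real event-kit implementation):

```javascript
// Simplified stand-in for event-kit's Disposable: runs its disposal
// action exactly once, even if dispose() is called repeatedly.
class Disposable {
  constructor (disposalAction) {
    this.disposed = false
    this.disposalAction = disposalAction
  }

  dispose () {
    if (!this.disposed) {
      this.disposed = true
      if (this.disposalAction) this.disposalAction()
      this.disposalAction = null
    }
  }
}

// A registry-style method returns a Disposable that deregisters the editor.
const maintainedEditors = new Set()
function maintainConfig (editor) {
  maintainedEditors.add(editor)
  return new Disposable(() => maintainedEditors.delete(editor))
}

const editor = {id: 1}
const subscription = maintainConfig(editor)
subscription.dispose() // the editor is no longer tracked
```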

  // Set a {TextEditor}'s grammar based on its path and content, and continue
  // to update its grammar as grammars are added or updated, or the editor's
  // file path changes.
  // Deprecated: set a {TextEditor}'s grammar based on its path and content,
  // and continue to update its grammar as grammars are added or updated, or
  // the editor's file path changes.
  //
  // * `editor` The editor whose grammar will be maintained.
  //
  // Returns a {Disposable} that can be used to stop updating the editor's
  // grammar.
  maintainGrammar (editor) {
    if (this.editorsWithMaintainedGrammar.has(editor)) {
      return new Disposable(noop)
    }

    this.editorsWithMaintainedGrammar.add(editor)

    const buffer = editor.getBuffer()
    for (let existingEditor of this.editorsWithMaintainedGrammar) {
      if (existingEditor.getBuffer() === buffer) {
        const existingOverride = this.editorGrammarOverrides[existingEditor.id]
        if (existingOverride) {
          this.editorGrammarOverrides[editor.id] = existingOverride
        }
        break
      }
    }

    this.selectGrammarForEditor(editor)

    const pathChangeSubscription = editor.onDidChangePath(() => {
      this.editorGrammarScores.delete(editor)
      this.selectGrammarForEditor(editor)
    })

    this.subscriptions.add(pathChangeSubscription)

    return new Disposable(() => {
      delete this.editorGrammarOverrides[editor.id]
      this.editorsWithMaintainedGrammar.delete(editor)
      this.subscriptions.remove(pathChangeSubscription)
      pathChangeSubscription.dispose()
    })
    atom.grammars.maintainGrammar(editor.getBuffer())
  }
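The removed `maintainGrammar` body copies an existing editor's grammar override onto any new editor that opens the same buffer, so both stay on the same grammar. A standalone sketch of that bookkeeping (names are illustrative, not Atom's API):

```javascript
// When a new editor shares a buffer with an already-maintained editor,
// inherit that editor's grammar override. Illustrative stand-alone version.
const editorGrammarOverrides = {}

function inheritOverride (editor, maintainedEditors) {
  for (const existing of maintainedEditors) {
    if (existing.buffer === editor.buffer) {
      const override = editorGrammarOverrides[existing.id]
      if (override) editorGrammarOverrides[editor.id] = override
      break
    }
  }
}

const a = {id: 1, buffer: 'buffer-1'}
editorGrammarOverrides[a.id] = 'source.ruby'

const b = {id: 2, buffer: 'buffer-1'} // second pane on the same buffer
inheritOverride(b, new Set([a]))      // b now shares a's override
```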

  // Force a {TextEditor} to use a different grammar than the one that would
  // otherwise be selected for it.
  // Deprecated: Force a {TextEditor} to use a different grammar than the
  // one that would otherwise be selected for it.
  //
  // * `editor` The editor whose grammar will be set.
  // * `scopeName` The {String} root scope name for the desired {Grammar}.
  setGrammarOverride (editor, scopeName) {
    this.editorGrammarOverrides[editor.id] = scopeName
    this.editorGrammarScores.delete(editor)
    editor.setGrammar(this.grammarRegistry.grammarForScopeName(scopeName))
  // * `languageId` The {String} language ID for the desired {Grammar}.
  setGrammarOverride (editor, languageId) {
    atom.grammars.assignLanguageMode(editor.getBuffer(), languageId)
  }

  // Retrieve the grammar scope name that has been set as a grammar override
  // for the given {TextEditor}.
  // Deprecated: Retrieve the grammar scope name that has been set as a
  // grammar override for the given {TextEditor}.
  //
  // * `editor` The editor.
  //
  // Returns a {String} scope name, or `null` if no override has been set
  // for the given editor.
  getGrammarOverride (editor) {
    return this.editorGrammarOverrides[editor.id]
    return editor.getBuffer().getLanguageMode().grammar.scopeName
  }

  // Remove any grammar override that has been set for the given {TextEditor}.
  // Deprecated: Remove any grammar override that has been set for the given {TextEditor}.
  //
  // * `editor` The editor.
  clearGrammarOverride (editor) {
    delete this.editorGrammarOverrides[editor.id]
    this.selectGrammarForEditor(editor)
  }

  // Private

  grammarAddedOrUpdated (grammar) {
    this.editorsWithMaintainedGrammar.forEach((editor) => {
      if (grammar.injectionSelector) {
        if (editor.tokenizedBuffer.hasTokenForSelector(grammar.injectionSelector)) {
          editor.tokenizedBuffer.retokenizeLines()
        }
        return
      }

      const grammarOverride = this.editorGrammarOverrides[editor.id]
      if (grammarOverride) {
        if (grammar.scopeName === grammarOverride) {
          editor.setGrammar(grammar)
        }
      } else {
        const score = this.grammarRegistry.getGrammarScore(
          grammar,
          editor.getPath(),
          editor.getTextInBufferRange(GRAMMAR_SELECTION_RANGE)
        )

        let currentScore = this.editorGrammarScores.get(editor)
        if (currentScore == null || score > currentScore) {
          editor.setGrammar(grammar)
          this.editorGrammarScores.set(editor, score)
        }
      }
    })
  }

  selectGrammarForEditor (editor) {
    const grammarOverride = this.editorGrammarOverrides[editor.id]

    if (grammarOverride) {
      const grammar = this.grammarRegistry.grammarForScopeName(grammarOverride)
      editor.setGrammar(grammar)
      return
    }

    const {grammar, score} = this.grammarRegistry.selectGrammarWithScore(
      editor.getPath(),
      editor.getTextInBufferRange(GRAMMAR_SELECTION_RANGE)
    )

    if (!grammar) {
      throw new Error(`No grammar found for path: ${editor.getPath()}`)
    }

    const currentScore = this.editorGrammarScores.get(editor)
    if (currentScore == null || score > currentScore) {
      editor.setGrammar(grammar)
      this.editorGrammarScores.set(editor, score)
    }
    atom.grammars.autoAssignLanguageMode(editor.getBuffer())
  }
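Both removed methods apply the same rule: a grammar only replaces the editor's current one when its score beats the best score recorded so far in `editorGrammarScores`. A standalone sketch of that rule (scores normally come from the grammar registry; the helper name is illustrative):

```javascript
// Keep the highest-scoring grammar per editor; lower-scoring candidates
// are ignored. Stand-alone illustration of the registry's score check.
const editorGrammarScores = new WeakMap()

function maybeAssignGrammar (editor, grammar, score) {
  const currentScore = editorGrammarScores.get(editor)
  if (currentScore == null || score > currentScore) {
    editor.grammar = grammar
    editorGrammarScores.set(editor, score)
    return true
  }
  return false
}

const editor = {}
maybeAssignGrammar(editor, 'text.plain', 0.5) // first candidate always wins
maybeAssignGrammar(editor, 'source.js', 0.9)  // higher score replaces it
maybeAssignGrammar(editor, 'text.plain', 0.1) // lower score is ignored
```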

  async subscribeToSettingsForEditorScope (editor) {
@@ -390,44 +284,3 @@ function shouldEditorUseSoftTabs (editor, tabType, softTabs) {
}

function noop () {}

class ScopedSettingsDelegate {
  constructor (config) {
    this.config = config
  }

  getNonWordCharacters (scope) {
    return this.config.get('editor.nonWordCharacters', {scope: scope})
  }

  getIncreaseIndentPattern (scope) {
    return this.config.get('editor.increaseIndentPattern', {scope: scope})
  }

  getDecreaseIndentPattern (scope) {
    return this.config.get('editor.decreaseIndentPattern', {scope: scope})
  }

  getDecreaseNextIndentPattern (scope) {
    return this.config.get('editor.decreaseNextIndentPattern', {scope: scope})
  }

  getFoldEndPattern (scope) {
    return this.config.get('editor.foldEndPattern', {scope: scope})
  }

  getCommentStrings (scope) {
    const commentStartEntries = this.config.getAll('editor.commentStart', {scope})
    const commentEndEntries = this.config.getAll('editor.commentEnd', {scope})
    const commentStartEntry = commentStartEntries[0]
    const commentEndEntry = commentEndEntries.find((entry) => {
      return entry.scopeSelector === commentStartEntry.scopeSelector
    })
    return {
      commentStartString: commentStartEntry && commentStartEntry.value,
      commentEndString: commentEndEntry && commentEndEntry.value
    }
  }
}

TextEditorRegistry.ScopedSettingsDelegate = ScopedSettingsDelegate
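`getCommentStrings` only pairs an end delimiter with the entry whose `scopeSelector` matches the entry that supplied the start delimiter, so settings from unrelated scopes never pair up. A sketch with stubbed entries (the real values come from `Config.getAll`):

```javascript
// Pair comment start/end delimiters by matching scopeSelector.
// Stubbed entries stand in for Config.getAll results.
function getCommentStrings (startEntries, endEntries) {
  const commentStartEntry = startEntries[0]
  const commentEndEntry = endEntries.find(
    entry => entry.scopeSelector === commentStartEntry.scopeSelector
  )
  return {
    commentStartString: commentStartEntry && commentStartEntry.value,
    commentEndString: commentEndEntry && commentEndEntry.value
  }
}

const result = getCommentStrings(
  [{scopeSelector: '.source.css', value: '/*'}],
  [
    {scopeSelector: '.source.js', value: ''},   // wrong scope: skipped
    {scopeSelector: '.source.css', value: '*/'} // matching scope: paired
  ]
)
```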

@@ -7,9 +7,10 @@ const {CompositeDisposable, Disposable, Emitter} = require('event-kit')
const TextBuffer = require('text-buffer')
const {Point, Range} = TextBuffer
const DecorationManager = require('./decoration-manager')
const TokenizedBuffer = require('./tokenized-buffer')
const Cursor = require('./cursor')
const Selection = require('./selection')
const NullGrammar = require('./null-grammar')
const TextMateLanguageMode = require('./text-mate-language-mode')

const TextMateScopeSelector = require('first-mate').ScopeSelector
const GutterContainer = require('./gutter-container')
@@ -22,6 +23,8 @@ const NON_WHITESPACE_REGEXP = /\S/
const ZERO_WIDTH_NBSP = '\ufeff'
let nextId = 0

const DEFAULT_NON_WORD_CHARACTERS = "/\\()\"':,.;<>~!@#$%^&*|+=[]{}`?-…"

// Essential: This class represents all essential editing state for a single
// {TextBuffer}, including cursor and selection positions, folds, and soft wraps.
// If you're manipulating the state of an editor, use this class.
@@ -86,12 +89,13 @@ class TextEditor {
  static deserialize (state, atomEnvironment) {
    if (state.version !== SERIALIZATION_VERSION) return null

    try {
      const tokenizedBuffer = TokenizedBuffer.deserialize(state.tokenizedBuffer, atomEnvironment)
      if (!tokenizedBuffer) return null
    let bufferId = state.tokenizedBuffer
      ? state.tokenizedBuffer.bufferId
      : state.bufferId

      state.tokenizedBuffer = tokenizedBuffer
      state.tabLength = state.tokenizedBuffer.getTabLength()
    try {
      state.buffer = atomEnvironment.project.bufferForIdSync(bufferId)
      if (!state.buffer) return null
    } catch (error) {
      if (error.syscall === 'read') {
        return // Error reading the file, don't deserialize an editor for it
@@ -100,7 +104,6 @@ class TextEditor {
      }
    }

    state.buffer = state.tokenizedBuffer.buffer
    state.assert = atomEnvironment.assert.bind(atomEnvironment)
    const editor = new TextEditor(state)
    if (state.registered) {
@@ -123,7 +126,6 @@ class TextEditor {
    this.mini = (params.mini != null) ? params.mini : false
    this.placeholderText = params.placeholderText
    this.showLineNumbers = params.showLineNumbers
    this.largeFileMode = params.largeFileMode
    this.assert = params.assert || (condition => condition)
    this.showInvisibles = (params.showInvisibles != null) ? params.showInvisibles : true
    this.autoHeight = params.autoHeight
@@ -142,7 +144,6 @@ class TextEditor {
    this.autoIndent = (params.autoIndent != null) ? params.autoIndent : true
    this.autoIndentOnPaste = (params.autoIndentOnPaste != null) ? params.autoIndentOnPaste : true
    this.undoGroupingInterval = (params.undoGroupingInterval != null) ? params.undoGroupingInterval : 300
    this.nonWordCharacters = (params.nonWordCharacters != null) ? params.nonWordCharacters : "/\\()\"':,.;<>~!@#$%^&*|+=[]{}`?-…"
    this.softWrapped = (params.softWrapped != null) ? params.softWrapped : false
    this.softWrapAtPreferredLineLength = (params.softWrapAtPreferredLineLength != null) ? params.softWrapAtPreferredLineLength : false
    this.preferredLineLength = (params.preferredLineLength != null) ? params.preferredLineLength : 80
@@ -171,17 +172,20 @@ class TextEditor {
    this.selections = []
    this.hasTerminatedPendingState = false

    this.buffer = params.buffer || new TextBuffer({
      shouldDestroyOnFileDelete () { return atom.config.get('core.closeDeletedFileTabs') }
    })
    if (params.buffer) {
      this.buffer = params.buffer
    } else {
      this.buffer = new TextBuffer({
        shouldDestroyOnFileDelete () { return atom.config.get('core.closeDeletedFileTabs') }
      })
      this.buffer.setLanguageMode(new TextMateLanguageMode({buffer: this.buffer, config: atom.config}))
    }

    this.tokenizedBuffer = params.tokenizedBuffer || new TokenizedBuffer({
      grammar: params.grammar,
      tabLength,
      buffer: this.buffer,
      largeFileMode: this.largeFileMode,
      assert: this.assert
    const languageMode = this.buffer.getLanguageMode()
    this.languageModeSubscription = languageMode.onDidTokenize && languageMode.onDidTokenize(() => {
      this.emitter.emit('did-tokenize')
    })
    if (this.languageModeSubscription) this.disposables.add(this.languageModeSubscription)

    if (params.displayLayer) {
      this.displayLayer = params.displayLayer
@@ -217,8 +221,6 @@ class TextEditor {
      this.selectionsMarkerLayer = this.addMarkerLayer({maintainHistory: true, persistent: true})
    }

    this.displayLayer.setTextDecorationLayer(this.tokenizedBuffer)

    this.decorationManager = new DecorationManager(this)
    this.decorateMarkerLayer(this.selectionsMarkerLayer, {type: 'cursor'})
    if (!this.isMini()) this.decorateCursorLine()
@@ -271,9 +273,8 @@ class TextEditor {
    return this
  }

  get languageMode () {
    return this.tokenizedBuffer
  }
  get languageMode () { return this.buffer.getLanguageMode() }
  get tokenizedBuffer () { return this.buffer.getLanguageMode() }

  get rowsPerPage () {
    return this.getRowsPerPage()
@@ -319,10 +320,6 @@ class TextEditor {
        this.undoGroupingInterval = value
        break

      case 'nonWordCharacters':
        this.nonWordCharacters = value
        break

      case 'scrollSensitivity':
        this.scrollSensitivity = value
        break
@@ -344,8 +341,7 @@ class TextEditor {
        break

      case 'tabLength':
        if (value > 0 && value !== this.tokenizedBuffer.getTabLength()) {
          this.tokenizedBuffer.setTabLength(value)
        if (value > 0 && value !== this.displayLayer.tabLength) {
          displayLayerParams.tabLength = value
        }
        break
@@ -513,26 +509,22 @@ class TextEditor {
  }

  serialize () {
    const tokenizedBufferState = this.tokenizedBuffer.serialize()

    return {
      deserializer: 'TextEditor',
      version: SERIALIZATION_VERSION,

      // TODO: Remove this forward-compatible fallback once 1.8 reaches stable.
      displayBuffer: {tokenizedBuffer: tokenizedBufferState},

      tokenizedBuffer: tokenizedBufferState,
      displayLayerId: this.displayLayer.id,
      selectionsMarkerLayerId: this.selectionsMarkerLayer.id,

      initialScrollTopRow: this.getScrollTopRow(),
      initialScrollLeftColumn: this.getScrollLeftColumn(),

      tabLength: this.displayLayer.tabLength,
      atomicSoftTabs: this.displayLayer.atomicSoftTabs,
      softWrapHangingIndentLength: this.displayLayer.softWrapHangingIndent,

      id: this.id,
      bufferId: this.buffer.id,
      softTabs: this.softTabs,
      softWrapped: this.softWrapped,
      softWrapAtPreferredLineLength: this.softWrapAtPreferredLineLength,
@@ -540,7 +532,6 @@ class TextEditor {
      mini: this.mini,
      editorWidthInChars: this.editorWidthInChars,
      width: this.width,
      largeFileMode: this.largeFileMode,
      maxScreenLineLength: this.maxScreenLineLength,
      registered: this.registered,
      invisibles: this.invisibles,
@@ -553,6 +544,7 @@ class TextEditor {

  subscribeToBuffer () {
    this.buffer.retain()
    this.disposables.add(this.buffer.onDidChangeLanguageMode(this.handleLanguageModeChange.bind(this)))
    this.disposables.add(this.buffer.onDidChangePath(() => {
      this.emitter.emit('did-change-title', this.getTitle())
      this.emitter.emit('did-change-path', this.getPath())
@@ -576,7 +568,6 @@ class TextEditor {
  }

  subscribeToDisplayLayer () {
    this.disposables.add(this.tokenizedBuffer.onDidChangeGrammar(this.handleGrammarChange.bind(this)))
    this.disposables.add(this.displayLayer.onDidChange(changes => {
      this.mergeIntersectingSelections()
      if (this.component) this.component.didChangeDisplayLayer(changes)
@@ -596,7 +587,6 @@ class TextEditor {
    this.alive = false
    this.disposables.dispose()
    this.displayLayer.destroy()
    this.tokenizedBuffer.destroy()
    for (let selection of this.selections.slice()) {
      selection.destroy()
    }
@@ -731,7 +721,9 @@ class TextEditor {
  //
  // Returns a {Disposable} on which `.dispose()` can be called to unsubscribe.
  onDidChangeGrammar (callback) {
    return this.emitter.on('did-change-grammar', callback)
    return this.buffer.onDidChangeLanguageMode(() => {
      callback(this.buffer.getLanguageMode().grammar)
    })
  }

  // Extended: Calls your `callback` when the result of {::isModified} changes.
@@ -947,7 +939,7 @@ class TextEditor {
      selectionsMarkerLayer,
      softTabs,
      suppressCursorCreation: true,
      tabLength: this.tokenizedBuffer.getTabLength(),
      tabLength: this.getTabLength(),
      initialScrollTopRow: this.getScrollTopRow(),
      initialScrollLeftColumn: this.getScrollLeftColumn(),
      assert: this.assert,
@@ -960,7 +952,12 @@ class TextEditor {
  }

  // Controls visibility based on the given {Boolean}.
  setVisible (visible) { this.tokenizedBuffer.setVisible(visible) }
  setVisible (visible) {
    if (visible) {
      const languageMode = this.buffer.getLanguageMode()
      if (languageMode.startTokenizing) languageMode.startTokenizing()
    }
  }

  setMini (mini) {
    this.update({mini})
@@ -3353,7 +3350,7 @@ class TextEditor {
  // Essential: Get the on-screen length of tab characters.
  //
  // Returns a {Number}.
  getTabLength () { return this.tokenizedBuffer.getTabLength() }
  getTabLength () { return this.displayLayer.tabLength }

  // Essential: Set the on-screen length of tab characters. Setting this to a
  // {Number} will override the `editor.tabLength` setting.
@@ -3384,9 +3381,10 @@ class TextEditor {
  // Returns a {Boolean} or undefined if no non-comment lines had leading
  // whitespace.
  usesSoftTabs () {
    const languageMode = this.buffer.getLanguageMode()
    const hasIsRowCommented = languageMode.isRowCommented
    for (let bufferRow = 0, end = Math.min(1000, this.buffer.getLastRow()); bufferRow <= end; bufferRow++) {
      const tokenizedLine = this.tokenizedBuffer.tokenizedLines[bufferRow]
      if (tokenizedLine && tokenizedLine.isComment()) continue
      if (hasIsRowCommented && languageMode.isRowCommented(bufferRow)) continue
      const line = this.buffer.lineForRow(bufferRow)
      if (line[0] === ' ') return true
      if (line[0] === '\t') return false
@@ -3509,7 +3507,19 @@ class TextEditor {
  //
  // Returns a {Number}.
  indentLevelForLine (line) {
    return this.tokenizedBuffer.indentLevelForLine(line)
    const tabLength = this.getTabLength()
    let indentLength = 0
    for (let i = 0, {length} = line; i < length; i++) {
      const char = line[i]
      if (char === '\t') {
        indentLength += tabLength - (indentLength % tabLength)
      } else if (char === ' ') {
        indentLength++
      } else {
        break
      }
    }
    return indentLength / tabLength
  }
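The inlined `indentLevelForLine` logic above treats a tab as advancing to the next tab stop, a space as one column, and divides the column count by the tab length, so fractional levels are possible. The same logic as a standalone function:

```javascript
// Compute a line's indentation level: tabs jump to the next tab stop,
// spaces count as one column, and the total is divided by tabLength.
function indentLevelForLine (line, tabLength) {
  let indentLength = 0
  for (let i = 0, {length} = line; i < length; i++) {
    const char = line[i]
    if (char === '\t') {
      indentLength += tabLength - (indentLength % tabLength)
    } else if (char === ' ') {
      indentLength++
    } else {
      break // first non-whitespace character ends the indent
    }
  }
  return indentLength / tabLength
}

indentLevelForLine('\t\tfoo', 2) // two tabs → level 2
indentLevelForLine('   bar', 2)  // three spaces → level 1.5
indentLevelForLine(' \tbaz', 2)  // space, then tab to column 2 → level 1
```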

  // Extended: Indent rows intersecting selections based on the grammar's suggested
@@ -3542,27 +3552,24 @@ class TextEditor {

  // Essential: Get the current {Grammar} of this editor.
  getGrammar () {
    return this.tokenizedBuffer.grammar
    const languageMode = this.buffer.getLanguageMode()
    return languageMode.getGrammar && languageMode.getGrammar() || NullGrammar
  }

  // Essential: Set the current {Grammar} of this editor.
  // Deprecated: Set the current {Grammar} of this editor.
  //
  // Assigning a grammar will cause the editor to re-tokenize based on the new
  // grammar.
  //
  // * `grammar` {Grammar}
  setGrammar (grammar) {
    return this.tokenizedBuffer.setGrammar(grammar)
  }

  // Reload the grammar based on the file name.
  reloadGrammar () {
    return this.tokenizedBuffer.reloadGrammar()
    const buffer = this.getBuffer()
    buffer.setLanguageMode(atom.grammars.languageModeForGrammarAndBuffer(grammar, buffer))
  }

  // Experimental: Get a notification when async tokenization is completed.
  onDidTokenize (callback) {
    return this.tokenizedBuffer.onDidTokenize(callback)
    return this.emitter.on('did-tokenize', callback)
  }

  /*
@@ -3573,7 +3580,7 @@ class TextEditor {
  // e.g. `['.source.ruby']`, or `['.source.coffee']`. You can use this with
  // {Config::get} to get language specific config values.
  getRootScopeDescriptor () {
    return this.tokenizedBuffer.rootScopeDescriptor
    return this.buffer.getLanguageMode().rootScopeDescriptor
  }

  // Essential: Get the syntactic scopeDescriptor for the given position in buffer
@@ -3587,7 +3594,7 @@ class TextEditor {
  //
  // Returns a {ScopeDescriptor}.
  scopeDescriptorForBufferPosition (bufferPosition) {
    return this.tokenizedBuffer.scopeDescriptorForPosition(bufferPosition)
    return this.buffer.getLanguageMode().scopeDescriptorForPosition(bufferPosition)
  }

  // Extended: Get the range in buffer coordinates of all tokens surrounding the
@@ -3604,7 +3611,7 @@ class TextEditor {
  }

  bufferRangeForScopeAtPosition (scopeSelector, position) {
    return this.tokenizedBuffer.bufferRangeForScopeAtPosition(scopeSelector, position)
    return this.buffer.getLanguageMode().bufferRangeForScopeAtPosition(scopeSelector, position)
  }

  // Extended: Determine if the given row is entirely a comment
@@ -3622,7 +3629,7 @@ class TextEditor {
  }

  tokenForBufferPosition (bufferPosition) {
    return this.tokenizedBuffer.tokenForPosition(bufferPosition)
    return this.buffer.getLanguageMode().tokenForPosition(bufferPosition)
  }

  /*
@@ -3749,7 +3756,11 @@ class TextEditor {
  // level.
  foldCurrentRow () {
    const {row} = this.getCursorBufferPosition()
    const range = this.tokenizedBuffer.getFoldableRangeContainingPoint(Point(row, Infinity))
    const languageMode = this.buffer.getLanguageMode()
    const range = (
      languageMode.getFoldableRangeContainingPoint &&
      languageMode.getFoldableRangeContainingPoint(Point(row, Infinity), this.getTabLength())
    )
    if (range) return this.displayLayer.foldBufferRange(range)
  }

@@ -3768,8 +3779,12 @@ class TextEditor {
  // * `bufferRow` A {Number}.
  foldBufferRow (bufferRow) {
    let position = Point(bufferRow, Infinity)
    const languageMode = this.buffer.getLanguageMode()
    while (true) {
      const foldableRange = this.tokenizedBuffer.getFoldableRangeContainingPoint(position, this.getTabLength())
      const foldableRange = (
        languageMode.getFoldableRangeContainingPoint &&
        languageMode.getFoldableRangeContainingPoint(position, this.getTabLength())
      )
      if (foldableRange) {
        const existingFolds = this.displayLayer.foldsIntersectingBufferRange(Range(foldableRange.start, foldableRange.start))
        if (existingFolds.length === 0) {
@@ -3803,8 +3818,13 @@ class TextEditor {

  // Extended: Fold all foldable lines.
  foldAll () {
    const languageMode = this.buffer.getLanguageMode()
    const foldableRanges = (
      languageMode.getFoldableRanges &&
      languageMode.getFoldableRanges(this.getTabLength())
    )
    this.displayLayer.destroyAllFolds()
    for (let range of this.tokenizedBuffer.getFoldableRanges(this.getTabLength())) {
    for (let range of foldableRanges || []) {
      this.displayLayer.foldBufferRange(range)
    }
  }
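The folding hunks above repeatedly use the `method && method(...)` guard so that a language mode lacking an optional capability degrades gracefully instead of throwing. A minimal sketch of that pattern (the function name and the stub modes are illustrative):

```javascript
// Guarded call to an optional language-mode capability: if the method is
// absent, fall back to an empty result instead of crashing.
function foldableRangesFor (languageMode, tabLength) {
  return (
    languageMode.getFoldableRanges &&
    languageMode.getFoldableRanges(tabLength)
  ) || []
}

const richMode = {getFoldableRanges: tabLength => [[0, 10]]}
const bareMode = {} // e.g. a mode that doesn't support folding

foldableRangesFor(richMode, 2) // → [[0, 10]]
foldableRangesFor(bareMode, 2) // → [] (no capability, no crash)
```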
@@ -3820,8 +3840,13 @@ class TextEditor {
  //
  // * `level` A {Number}.
  foldAllAtIndentLevel (level) {
    const languageMode = this.buffer.getLanguageMode()
    const foldableRanges = (
      languageMode.getFoldableRangesAtIndentLevel &&
      languageMode.getFoldableRangesAtIndentLevel(level, this.getTabLength())
    )
    this.displayLayer.destroyAllFolds()
    for (let range of this.tokenizedBuffer.getFoldableRangesAtIndentLevel(level, this.getTabLength())) {
    for (let range of foldableRanges || []) {
      this.displayLayer.foldBufferRange(range)
    }
  }
@@ -3834,7 +3859,8 @@ class TextEditor {
  //
  // Returns a {Boolean}.
  isFoldableAtBufferRow (bufferRow) {
    return this.tokenizedBuffer.isFoldableAtRow(bufferRow)
    const languageMode = this.buffer.getLanguageMode()
    return languageMode.isFoldableAtRow && languageMode.isFoldableAtRow(bufferRow)
  }

  // Extended: Determine whether the given row in screen coordinates is foldable.
@@ -4039,18 +4065,6 @@ class TextEditor {
  Section: Config
  */

  // Experimental: Supply an object that will provide the editor with settings
  // for specific syntactic scopes. See the `ScopedSettingsDelegate` in
  // `text-editor-registry.js` for an example implementation.
  setScopedSettingsDelegate (scopedSettingsDelegate) {
    this.scopedSettingsDelegate = scopedSettingsDelegate
    this.tokenizedBuffer.scopedSettingsDelegate = this.scopedSettingsDelegate
  }

  // Experimental: Retrieve the {Object} that provides the editor with settings
  // for specific syntactic scopes.
  getScopedSettingsDelegate () { return this.scopedSettingsDelegate }

  // Experimental: Is auto-indentation enabled for this editor?
  //
  // Returns a {Boolean}.
@@ -4098,21 +4112,34 @@ class TextEditor {
  // for the purpose of word-based cursor movements.
  //
  // Returns a {String} containing the non-word characters.
  getNonWordCharacters (scopes) {
    if (this.scopedSettingsDelegate && this.scopedSettingsDelegate.getNonWordCharacters) {
      return this.scopedSettingsDelegate.getNonWordCharacters(scopes) || this.nonWordCharacters
    } else {
      return this.nonWordCharacters
    }
  getNonWordCharacters (position) {
    const languageMode = this.buffer.getLanguageMode()
    return (
      languageMode.getNonWordCharacters &&
      languageMode.getNonWordCharacters(position || Point(0, 0))
    ) || DEFAULT_NON_WORD_CHARACTERS
  }

  /*
  Section: Event Handlers
  */

  handleGrammarChange () {
  handleLanguageModeChange () {
    this.unfoldAll()
    return this.emitter.emit('did-change-grammar', this.getGrammar())
    if (this.languageModeSubscription) {
      this.languageModeSubscription.dispose()
      this.disposables.remove(this.languageModeSubscription)
    }
    const languageMode = this.buffer.getLanguageMode()

    if (this.component && this.component.visible && languageMode.startTokenizing) {
      languageMode.startTokenizing()
    }
    this.languageModeSubscription = languageMode.onDidTokenize && languageMode.onDidTokenize(() => {
      this.emitter.emit('did-tokenize')
    })
    if (this.languageModeSubscription) this.disposables.add(this.languageModeSubscription)
    this.emitter.emit('did-change-grammar', languageMode.grammar)
  }
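The new `handleLanguageModeChange` disposes the old language mode's `did-tokenize` listener before attaching one to the new mode, so the editor never keeps listeners on a stale mode. A sketch of that subscription swap with minimal stand-ins for the event-kit objects (the helper name is illustrative):

```javascript
// Swap the per-language-mode tokenize subscription: dispose the old one,
// then attach to the new mode only if it supports onDidTokenize.
function attachTokenizeListener (editor, languageMode, onTokenize) {
  if (editor.languageModeSubscription) {
    editor.languageModeSubscription.dispose()
  }
  editor.languageModeSubscription =
    languageMode.onDidTokenize && languageMode.onDidTokenize(onTokenize)
}

let disposed = 0
const oldMode = {onDidTokenize: cb => ({dispose: () => disposed++})}
const newMode = {} // a mode without async tokenization
const editor = {languageModeSubscription: null}

attachTokenizeListener(editor, oldMode, () => {})
attachTokenizeListener(editor, newMode, () => {}) // disposes the old listener
```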

  /*
@@ -4382,7 +4409,11 @@ class TextEditor {
  */

  suggestedIndentForBufferRow (bufferRow, options) {
    return this.tokenizedBuffer.suggestedIndentForBufferRow(bufferRow, options)
    const languageMode = this.buffer.getLanguageMode()
    return (
      languageMode.suggestedIndentForBufferRow &&
      languageMode.suggestedIndentForBufferRow(bufferRow, this.getTabLength(), options)
    )
  }

  // Given a buffer row, indent it.
@@ -4407,17 +4438,21 @@ class TextEditor {
  }

  autoDecreaseIndentForBufferRow (bufferRow) {
    const indentLevel = this.tokenizedBuffer.suggestedIndentForEditedBufferRow(bufferRow)
    const languageMode = this.buffer.getLanguageMode()
    const indentLevel = (
      languageMode.suggestedIndentForEditedBufferRow &&
      languageMode.suggestedIndentForEditedBufferRow(bufferRow, this.getTabLength())
    )
    if (indentLevel != null) this.setIndentationForBufferRow(bufferRow, indentLevel)
  }

  toggleLineCommentForBufferRow (row) { this.toggleLineCommentsForBufferRows(row, row) }

  toggleLineCommentsForBufferRows (start, end) {
    let {
      commentStartString,
      commentEndString
    } = this.tokenizedBuffer.commentStringsForPosition(Point(start, 0))
    const languageMode = this.buffer.getLanguageMode()
    let {commentStartString, commentEndString} =
      languageMode.commentStringsForPosition &&
      languageMode.commentStringsForPosition(Point(start, 0)) || {}
    if (!commentStartString) return
    commentStartString = commentStartString.trim()

@@ -4508,12 +4543,13 @@ class TextEditor {
  rowRangeForParagraphAtBufferRow (bufferRow) {
    if (!NON_WHITESPACE_REGEXP.test(this.lineTextForBufferRow(bufferRow))) return

    const isCommented = this.tokenizedBuffer.isRowCommented(bufferRow)
    const languageMode = this.buffer.getLanguageMode()
    const isCommented = languageMode.isRowCommented(bufferRow)

    let startRow = bufferRow
    while (startRow > 0) {
      if (!NON_WHITESPACE_REGEXP.test(this.lineTextForBufferRow(startRow - 1))) break
      if (this.tokenizedBuffer.isRowCommented(startRow - 1) !== isCommented) break
      if (languageMode.isRowCommented(startRow - 1) !== isCommented) break
      startRow--
    }

@@ -4521,7 +4557,7 @@ class TextEditor {
    const rowCount = this.getLineCount()
    while (endRow < rowCount) {
      if (!NON_WHITESPACE_REGEXP.test(this.lineTextForBufferRow(endRow + 1))) break
      if (this.tokenizedBuffer.isRowCommented(endRow + 1) !== isCommented) break
      if (languageMode.isRowCommented(endRow + 1) !== isCommented) break
      endRow++
    }
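`rowRangeForParagraphAtBufferRow` grows the paragraph up and down from the starting row until it hits a blank line or a row whose commented-ness differs from the starting row's. A standalone sketch over an array of lines (the function shape is illustrative; the real method reads rows from the buffer):

```javascript
// Expand a paragraph from `row` in both directions, stopping at blank
// lines or at rows whose commented-ness differs from the starting row.
const NON_WHITESPACE_REGEXP = /\S/

function rowRangeForParagraph (lines, row, isRowCommented) {
  if (!NON_WHITESPACE_REGEXP.test(lines[row])) return
  const isCommented = isRowCommented(row)

  let startRow = row
  while (startRow > 0) {
    if (!NON_WHITESPACE_REGEXP.test(lines[startRow - 1])) break
    if (isRowCommented(startRow - 1) !== isCommented) break
    startRow--
  }

  let endRow = row
  const rowCount = lines.length - 1
  while (endRow < rowCount) {
    if (!NON_WHITESPACE_REGEXP.test(lines[endRow + 1])) break
    if (isRowCommented(endRow + 1) !== isCommented) break
    endRow++
  }
  return [startRow, endRow]
}

const lines = ['// a', 'code', 'more code', '', 'other']
const range = rowRangeForParagraph(lines, 1, row => lines[row].startsWith('//'))
// rows 1-2 form the paragraph: row 0 is commented, row 3 is blank
```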
|
||||
|
||||
|
||||
@@ -4,27 +4,16 @@ const {Point, Range} = require('text-buffer')
 const TokenizedLine = require('./tokenized-line')
 const TokenIterator = require('./token-iterator')
 const ScopeDescriptor = require('./scope-descriptor')
-const TokenizedBufferIterator = require('./tokenized-buffer-iterator')
 const NullGrammar = require('./null-grammar')
 const {OnigRegExp} = require('oniguruma')
-const {toFirstMateScopeId} = require('./first-mate-helpers')
+const {toFirstMateScopeId, fromFirstMateScopeId} = require('./first-mate-helpers')
 
 const NON_WHITESPACE_REGEX = /\S/
 
 let nextId = 0
 const prefixedScopes = new Map()
 
-module.exports =
-class TokenizedBuffer {
-  static deserialize (state, atomEnvironment) {
-    const buffer = atomEnvironment.project.bufferForIdSync(state.bufferId)
-    if (!buffer) return null
-
-    state.buffer = buffer
-    state.assert = atomEnvironment.assert
-    return new TokenizedBuffer(state)
-  }
-
+class TextMateLanguageMode {
   constructor (params) {
     this.emitter = new Emitter()
     this.disposables = new CompositeDisposable()
@@ -32,16 +21,19 @@ class TokenizedBuffer {
     this.regexesByPattern = {}
 
     this.alive = true
-    this.visible = false
+    this.tokenizationStarted = false
     this.id = params.id != null ? params.id : nextId++
     this.buffer = params.buffer
     this.tabLength = params.tabLength
-    this.largeFileMode = params.largeFileMode
     this.assert = params.assert
-    this.scopedSettingsDelegate = params.scopedSettingsDelegate
+    this.config = params.config
+    this.largeFileMode = params.largeFileMode != null
+      ? params.largeFileMode
+      : this.buffer.buffer.getLength() >= 2 * 1024 * 1024
 
-    this.setGrammar(params.grammar || NullGrammar)
-    this.disposables.add(this.buffer.registerTextDecorationLayer(this))
+    this.grammar = params.grammar || NullGrammar
+    this.rootScopeDescriptor = new ScopeDescriptor({scopes: [this.grammar.scopeName]})
+    this.disposables.add(this.grammar.onDidUpdate(() => this.retokenizeLines()))
+    this.retokenizeLines()
   }
 
   destroy () {
@@ -59,6 +51,19 @@ class TokenizedBuffer {
     return !this.alive
   }
 
+  getGrammar () {
+    return this.grammar
+  }
+
+  getLanguageId () {
+    return this.grammar.scopeName
+  }
+
+  getNonWordCharacters (position) {
+    const scope = this.scopeDescriptorForPosition(position)
+    return this.config.get('editor.nonWordCharacters', {scope})
+  }
+
   /*
   Section - auto-indent
   */
@@ -68,10 +73,14 @@ class TokenizedBuffer {
   // * bufferRow - A {Number} indicating the buffer row
   //
   // Returns a {Number}.
-  suggestedIndentForBufferRow (bufferRow, options) {
-    const line = this.buffer.lineForRow(bufferRow)
-    const tokenizedLine = this.tokenizedLineForRow(bufferRow)
-    return this._suggestedIndentForTokenizedLineAtBufferRow(bufferRow, line, tokenizedLine, options)
+  suggestedIndentForBufferRow (bufferRow, tabLength, options) {
+    return this._suggestedIndentForTokenizedLineAtBufferRow(
+      bufferRow,
+      this.buffer.lineForRow(bufferRow),
+      this.tokenizedLineForRow(bufferRow),
+      tabLength,
+      options
+    )
   }
 
   // Get the suggested indentation level for a given line of text, if it were inserted at the given
@@ -80,9 +89,13 @@ class TokenizedBuffer {
   // * bufferRow - A {Number} indicating the buffer row
   //
   // Returns a {Number}.
-  suggestedIndentForLineAtBufferRow (bufferRow, line, options) {
-    const tokenizedLine = this.buildTokenizedLineForRowWithText(bufferRow, line)
-    return this._suggestedIndentForTokenizedLineAtBufferRow(bufferRow, line, tokenizedLine, options)
+  suggestedIndentForLineAtBufferRow (bufferRow, line, tabLength) {
+    return this._suggestedIndentForTokenizedLineAtBufferRow(
+      bufferRow,
+      line,
+      this.buildTokenizedLineForRowWithText(bufferRow, line),
+      tabLength
+    )
   }
 
   // Get the suggested indentation level for a line in the buffer on which the user is currently
@@ -93,9 +106,9 @@ class TokenizedBuffer {
   // * bufferRow - The row {Number}
   //
   // Returns a {Number}.
-  suggestedIndentForEditedBufferRow (bufferRow) {
+  suggestedIndentForEditedBufferRow (bufferRow, tabLength) {
     const line = this.buffer.lineForRow(bufferRow)
-    const currentIndentLevel = this.indentLevelForLine(line)
+    const currentIndentLevel = this.indentLevelForLine(line, tabLength)
     if (currentIndentLevel === 0) return
 
     const scopeDescriptor = this.scopeDescriptorForPosition([bufferRow, 0])
@@ -108,7 +121,7 @@ class TokenizedBuffer {
     if (precedingRow == null) return
 
     const precedingLine = this.buffer.lineForRow(precedingRow)
-    let desiredIndentLevel = this.indentLevelForLine(precedingLine)
+    let desiredIndentLevel = this.indentLevelForLine(precedingLine, tabLength)
 
     const increaseIndentRegex = this.increaseIndentRegexForScopeDescriptor(scopeDescriptor)
     if (increaseIndentRegex) {
@@ -125,7 +138,7 @@ class TokenizedBuffer {
     return desiredIndentLevel
   }
 
-  _suggestedIndentForTokenizedLineAtBufferRow (bufferRow, line, tokenizedLine, options) {
+  _suggestedIndentForTokenizedLineAtBufferRow (bufferRow, line, tokenizedLine, tabLength, options) {
     const iterator = tokenizedLine.getTokenIterator()
     iterator.next()
     const scopeDescriptor = new ScopeDescriptor({scopes: iterator.getScopes()})
@@ -144,7 +157,7 @@ class TokenizedBuffer {
     }
 
     const precedingLine = this.buffer.lineForRow(precedingRow)
-    let desiredIndentLevel = this.indentLevelForLine(precedingLine)
+    let desiredIndentLevel = this.indentLevelForLine(precedingLine, tabLength)
     if (!increaseIndentRegex) return desiredIndentLevel
 
     if (!this.isRowCommented(precedingRow)) {
@@ -164,16 +177,25 @@ class TokenizedBuffer {
   */
 
   commentStringsForPosition (position) {
-    if (this.scopedSettingsDelegate) {
-      const scope = this.scopeDescriptorForPosition(position)
-      return this.scopedSettingsDelegate.getCommentStrings(scope)
-    } else {
-      return {}
+    const scope = this.scopeDescriptorForPosition(position)
+    const commentStartEntries = this.config.getAll('editor.commentStart', {scope})
+    const commentEndEntries = this.config.getAll('editor.commentEnd', {scope})
+    const commentStartEntry = commentStartEntries[0]
+    const commentEndEntry = commentEndEntries.find((entry) => {
+      return entry.scopeSelector === commentStartEntry.scopeSelector
+    })
+    return {
+      commentStartString: commentStartEntry && commentStartEntry.value,
+      commentEndString: commentEndEntry && commentEndEntry.value
     }
   }
 
-  buildIterator () {
-    return new TokenizedBufferIterator(this)
+  /*
+  Section - Syntax Highlighting
+  */
+
+  buildHighlightIterator () {
+    return new TextMateHighlightIterator(this)
   }
 
   classNameForScopeId (id) {
@@ -196,47 +218,14 @@ class TokenizedBuffer {
     return []
   }
 
-  onDidInvalidateRange (fn) {
-    return this.emitter.on('did-invalidate-range', fn)
-  }
-
-  serialize () {
-    return {
-      deserializer: 'TokenizedBuffer',
-      bufferPath: this.buffer.getPath(),
-      bufferId: this.buffer.getId(),
-      tabLength: this.tabLength,
-      largeFileMode: this.largeFileMode
-    }
-  }
-
-  observeGrammar (callback) {
-    callback(this.grammar)
-    return this.onDidChangeGrammar(callback)
-  }
-
-  onDidChangeGrammar (callback) {
-    return this.emitter.on('did-change-grammar', callback)
+  onDidChangeHighlighting (fn) {
+    return this.emitter.on('did-change-highlighting', fn)
   }
 
   onDidTokenize (callback) {
     return this.emitter.on('did-tokenize', callback)
   }
 
-  setGrammar (grammar) {
-    if (!grammar || grammar === this.grammar) return
-
-    this.grammar = grammar
-    this.rootScopeDescriptor = new ScopeDescriptor({scopes: [this.grammar.scopeName]})
-
-    if (this.grammarUpdateDisposable) this.grammarUpdateDisposable.dispose()
-    this.grammarUpdateDisposable = this.grammar.onDidUpdate(() => this.retokenizeLines())
-    this.disposables.add(this.grammarUpdateDisposable)
-
-    this.retokenizeLines()
-    this.emitter.emit('did-change-grammar', grammar)
-  }
-
-  getGrammarSelectionContent () {
-    return this.buffer.getTextInRange([[0, 0], [10, 0]])
-  }
@@ -264,21 +253,15 @@ class TokenizedBuffer {
     }
   }
 
-  setVisible (visible) {
-    this.visible = visible
-    if (this.visible && this.grammar.name !== 'Null Grammar' && !this.largeFileMode) {
+  startTokenizing () {
+    this.tokenizationStarted = true
+    if (this.grammar.name !== 'Null Grammar' && !this.largeFileMode) {
       this.tokenizeInBackground()
     }
   }
 
-  getTabLength () { return this.tabLength }
-
-  setTabLength (tabLength) {
-    this.tabLength = tabLength
-  }
-
   tokenizeInBackground () {
-    if (!this.visible || this.pendingChunk || !this.alive) return
+    if (!this.tokenizationStarted || this.pendingChunk || !this.alive) return
 
     this.pendingChunk = true
     _.defer(() => {
@@ -316,7 +299,7 @@ class TokenizedBuffer {
     this.validateRow(endRow)
     if (!filledRegion) this.invalidateRow(endRow + 1)
 
-    this.emitter.emit('did-invalidate-range', Range(Point(startRow, 0), Point(endRow + 1, 0)))
+    this.emitter.emit('did-change-highlighting', Range(Point(startRow, 0), Point(endRow + 1, 0)))
   }
 
     if (this.firstInvalidRow() != null) {
@@ -486,18 +469,6 @@ class TokenizedBuffer {
     while (true) {
       if (scopes.pop() === matchingStartTag) break
       if (scopes.length === 0) {
-        this.assert(false, 'Encountered an unmatched scope end tag.', error => {
-          error.metadata = {
-            grammarScopeName: this.grammar.scopeName,
-            unmatchedEndTag: this.grammar.scopeForId(tag)
-          }
-          const path = require('path')
-          error.privateMetadataDescription = `The contents of \`${path.basename(this.buffer.getPath())}\``
-          error.privateMetadata = {
-            filePath: this.buffer.getPath(),
-            fileContents: this.buffer.getText()
-          }
-        })
         break
       }
     }
@@ -507,7 +478,7 @@ class TokenizedBuffer {
     return scopes
   }
 
-  indentLevelForLine (line, tabLength = this.tabLength) {
+  indentLevelForLine (line, tabLength) {
     let indentLength = 0
     for (let i = 0, {length} = line; i < length; i++) {
       const char = line[i]
@@ -712,28 +683,20 @@ class TokenizedBuffer {
     return foldEndRow
   }
 
-  increaseIndentRegexForScopeDescriptor (scopeDescriptor) {
-    if (this.scopedSettingsDelegate) {
-      return this.regexForPattern(this.scopedSettingsDelegate.getIncreaseIndentPattern(scopeDescriptor))
-    }
+  increaseIndentRegexForScopeDescriptor (scope) {
+    return this.regexForPattern(this.config.get('editor.increaseIndentPattern', {scope}))
   }
 
-  decreaseIndentRegexForScopeDescriptor (scopeDescriptor) {
-    if (this.scopedSettingsDelegate) {
-      return this.regexForPattern(this.scopedSettingsDelegate.getDecreaseIndentPattern(scopeDescriptor))
-    }
+  decreaseIndentRegexForScopeDescriptor (scope) {
+    return this.regexForPattern(this.config.get('editor.decreaseIndentPattern', {scope}))
  }
 
-  decreaseNextIndentRegexForScopeDescriptor (scopeDescriptor) {
-    if (this.scopedSettingsDelegate) {
-      return this.regexForPattern(this.scopedSettingsDelegate.getDecreaseNextIndentPattern(scopeDescriptor))
-    }
+  decreaseNextIndentRegexForScopeDescriptor (scope) {
+    return this.regexForPattern(this.config.get('editor.decreaseNextIndentPattern', {scope}))
  }
 
-  foldEndRegexForScopeDescriptor (scopes) {
-    if (this.scopedSettingsDelegate) {
-      return this.regexForPattern(this.scopedSettingsDelegate.getFoldEndPattern(scopes))
-    }
+  foldEndRegexForScopeDescriptor (scope) {
+    return this.regexForPattern(this.config.get('editor.foldEndPattern', {scope}))
   }
 
   regexForPattern (pattern) {
@@ -753,7 +716,7 @@ class TokenizedBuffer {
   }
 }
 
-module.exports.prototype.chunkSize = 50
+TextMateLanguageMode.prototype.chunkSize = 50
 
 function selectorMatchesAnyScope (selector, scopes) {
   const targetClasses = selector.replace(/^\./, '').split('.')
@@ -762,3 +725,142 @@ function selectorMatchesAnyScope (selector, scopes) {
     return _.isSubset(targetClasses, scopeClasses)
   })
 }
+
+class TextMateHighlightIterator {
+  constructor (languageMode) {
+    this.languageMode = languageMode
+    this.openScopeIds = null
+    this.closeScopeIds = null
+  }
+
+  seek (position) {
+    this.openScopeIds = []
+    this.closeScopeIds = []
+    this.tagIndex = null
+
+    const currentLine = this.languageMode.tokenizedLineForRow(position.row)
+    this.currentLineTags = currentLine.tags
+    this.currentLineLength = currentLine.text.length
+    const containingScopeIds = currentLine.openScopes.map((id) => fromFirstMateScopeId(id))
+
+    let currentColumn = 0
+    for (let index = 0; index < this.currentLineTags.length; index++) {
+      const tag = this.currentLineTags[index]
+      if (tag >= 0) {
+        if (currentColumn >= position.column) {
+          this.tagIndex = index
+          break
+        } else {
+          currentColumn += tag
+          while (this.closeScopeIds.length > 0) {
+            this.closeScopeIds.shift()
+            containingScopeIds.pop()
+          }
+          while (this.openScopeIds.length > 0) {
+            const openTag = this.openScopeIds.shift()
+            containingScopeIds.push(openTag)
+          }
+        }
+      } else {
+        const scopeId = fromFirstMateScopeId(tag)
+        if ((tag & 1) === 0) {
+          if (this.openScopeIds.length > 0) {
+            if (currentColumn >= position.column) {
+              this.tagIndex = index
+              break
+            } else {
+              while (this.closeScopeIds.length > 0) {
+                this.closeScopeIds.shift()
+                containingScopeIds.pop()
+              }
+              while (this.openScopeIds.length > 0) {
+                const openTag = this.openScopeIds.shift()
+                containingScopeIds.push(openTag)
+              }
+            }
+          }
+          this.closeScopeIds.push(scopeId)
+        } else {
+          this.openScopeIds.push(scopeId)
+        }
+      }
+    }
+
+    if (this.tagIndex == null) {
+      this.tagIndex = this.currentLineTags.length
+    }
+    this.position = Point(position.row, Math.min(this.currentLineLength, currentColumn))
+    return containingScopeIds
+  }
+
+  moveToSuccessor () {
+    this.openScopeIds = []
+    this.closeScopeIds = []
+    while (true) {
+      if (this.tagIndex === this.currentLineTags.length) {
+        if (this.isAtTagBoundary()) {
+          break
+        } else if (!this.moveToNextLine()) {
+          return false
+        }
+      } else {
+        const tag = this.currentLineTags[this.tagIndex]
+        if (tag >= 0) {
+          if (this.isAtTagBoundary()) {
+            break
+          } else {
+            this.position = Point(this.position.row, Math.min(
+              this.currentLineLength,
+              this.position.column + this.currentLineTags[this.tagIndex]
+            ))
+          }
+        } else {
+          const scopeId = fromFirstMateScopeId(tag)
+          if ((tag & 1) === 0) {
+            if (this.openScopeIds.length > 0) {
+              break
+            } else {
+              this.closeScopeIds.push(scopeId)
+            }
+          } else {
+            this.openScopeIds.push(scopeId)
+          }
+        }
+        this.tagIndex++
+      }
+    }
+    return true
+  }
+
+  getPosition () {
+    return this.position
+  }
+
+  getCloseScopeIds () {
+    return this.closeScopeIds.slice()
+  }
+
+  getOpenScopeIds () {
+    return this.openScopeIds.slice()
+  }
+
+  moveToNextLine () {
+    this.position = Point(this.position.row + 1, 0)
+    const tokenizedLine = this.languageMode.tokenizedLineForRow(this.position.row)
+    if (tokenizedLine == null) {
+      return false
+    } else {
+      this.currentLineTags = tokenizedLine.tags
+      this.currentLineLength = tokenizedLine.text.length
+      this.tagIndex = 0
+      return true
+    }
+  }
+
+  isAtTagBoundary () {
+    return this.closeScopeIds.length > 0 || this.openScopeIds.length > 0
+  }
+}
+
+TextMateLanguageMode.TextMateHighlightIterator = TextMateHighlightIterator
+module.exports = TextMateLanguageMode
@@ -1,7 +1,7 @@
 module.exports =
 class TokenIterator {
-  constructor (tokenizedBuffer) {
-    this.tokenizedBuffer = tokenizedBuffer
+  constructor (languageMode) {
+    this.languageMode = languageMode
   }
 
   reset (line) {
@@ -9,7 +9,7 @@ class TokenIterator {
     this.index = null
     this.startColumn = 0
    this.endColumn = 0
-    this.scopes = this.line.openScopes.map(id => this.tokenizedBuffer.grammar.scopeForId(id))
+    this.scopes = this.line.openScopes.map(id => this.languageMode.grammar.scopeForId(id))
     this.scopeStarts = this.scopes.slice()
     this.scopeEnds = []
     return this
@@ -30,7 +30,7 @@ class TokenIterator {
     while (this.index < tags.length) {
       const tag = tags[this.index]
       if (tag < 0) {
-        const scope = this.tokenizedBuffer.grammar.scopeForId(tag)
+        const scope = this.languageMode.grammar.scopeForId(tag)
         if ((tag % 2) === 0) {
           if (this.scopeStarts[this.scopeStarts.length - 1] === scope) {
             this.scopeStarts.pop()
@@ -1,138 +0,0 @@
-const {Point} = require('text-buffer')
-const {fromFirstMateScopeId} = require('./first-mate-helpers')
-
-module.exports = class TokenizedBufferIterator {
-  constructor (tokenizedBuffer) {
-    this.tokenizedBuffer = tokenizedBuffer
-    this.openScopeIds = null
-    this.closeScopeIds = null
-  }
-
-  seek (position) {
-    this.openScopeIds = []
-    this.closeScopeIds = []
-    this.tagIndex = null
-
-    const currentLine = this.tokenizedBuffer.tokenizedLineForRow(position.row)
-    this.currentLineTags = currentLine.tags
-    this.currentLineLength = currentLine.text.length
-    const containingScopeIds = currentLine.openScopes.map((id) => fromFirstMateScopeId(id))
-
-    let currentColumn = 0
-    for (let index = 0; index < this.currentLineTags.length; index++) {
-      const tag = this.currentLineTags[index]
-      if (tag >= 0) {
-        if (currentColumn >= position.column) {
-          this.tagIndex = index
-          break
-        } else {
-          currentColumn += tag
-          while (this.closeScopeIds.length > 0) {
-            this.closeScopeIds.shift()
-            containingScopeIds.pop()
-          }
-          while (this.openScopeIds.length > 0) {
-            const openTag = this.openScopeIds.shift()
-            containingScopeIds.push(openTag)
-          }
-        }
-      } else {
-        const scopeId = fromFirstMateScopeId(tag)
-        if ((tag & 1) === 0) {
-          if (this.openScopeIds.length > 0) {
-            if (currentColumn >= position.column) {
-              this.tagIndex = index
-              break
-            } else {
-              while (this.closeScopeIds.length > 0) {
-                this.closeScopeIds.shift()
-                containingScopeIds.pop()
-              }
-              while (this.openScopeIds.length > 0) {
-                const openTag = this.openScopeIds.shift()
-                containingScopeIds.push(openTag)
-              }
-            }
-          }
-          this.closeScopeIds.push(scopeId)
-        } else {
-          this.openScopeIds.push(scopeId)
-        }
-      }
-    }
-
-    if (this.tagIndex == null) {
-      this.tagIndex = this.currentLineTags.length
-    }
-    this.position = Point(position.row, Math.min(this.currentLineLength, currentColumn))
-    return containingScopeIds
-  }
-
-  moveToSuccessor () {
-    this.openScopeIds = []
-    this.closeScopeIds = []
-    while (true) {
-      if (this.tagIndex === this.currentLineTags.length) {
-        if (this.isAtTagBoundary()) {
-          break
-        } else if (!this.moveToNextLine()) {
-          return false
-        }
-      } else {
-        const tag = this.currentLineTags[this.tagIndex]
-        if (tag >= 0) {
-          if (this.isAtTagBoundary()) {
-            break
-          } else {
-            this.position = Point(this.position.row, Math.min(
-              this.currentLineLength,
-              this.position.column + this.currentLineTags[this.tagIndex]
-            ))
-          }
-        } else {
-          const scopeId = fromFirstMateScopeId(tag)
-          if ((tag & 1) === 0) {
-            if (this.openScopeIds.length > 0) {
-              break
-            } else {
-              this.closeScopeIds.push(scopeId)
-            }
-          } else {
-            this.openScopeIds.push(scopeId)
-          }
-        }
-        this.tagIndex++
-      }
-    }
-    return true
-  }
-
-  getPosition () {
-    return this.position
-  }
-
-  getCloseScopeIds () {
-    return this.closeScopeIds.slice()
-  }
-
-  getOpenScopeIds () {
-    return this.openScopeIds.slice()
-  }
-
-  moveToNextLine () {
-    this.position = Point(this.position.row + 1, 0)
-    const tokenizedLine = this.tokenizedBuffer.tokenizedLineForRow(this.position.row)
-    if (tokenizedLine == null) {
-      return false
-    } else {
-      this.currentLineTags = tokenizedLine.tags
-      this.currentLineLength = tokenizedLine.text.length
-      this.tagIndex = 0
-      return true
-    }
-  }
-
-  isAtTagBoundary () {
-    return this.closeScopeIds.length > 0 || this.openScopeIds.length > 0
-  }
-}
@@ -494,10 +494,12 @@ module.exports = class Workspace extends Model {
     if (item instanceof TextEditor) {
       const subscriptions = new CompositeDisposable(
         this.textEditorRegistry.add(item),
-        this.textEditorRegistry.maintainGrammar(item),
         this.textEditorRegistry.maintainConfig(item),
         item.observeGrammar(this.handleGrammarUsed.bind(this))
       )
+      if (!this.project.findBufferForId(item.buffer.id)) {
+        this.project.addBuffer(item.buffer)
+      }
       item.onDidDestroy(() => { subscriptions.dispose() })
       this.emitter.emit('did-add-text-editor', {textEditor: item, pane, index})
     }
@@ -1212,9 +1214,7 @@ module.exports = class Workspace extends Model {
   }
 
     const fileSize = fs.getSizeSync(filePath)
-
-    const largeFileMode = fileSize >= (2 * 1048576) // 2MB
-    if (fileSize >= (this.config.get('core.warnOnLargeFileLimit') * 1048576)) { // 20MB by default
+    if (fileSize >= (this.config.get('core.warnOnLargeFileLimit') * 1048576)) {
      const choice = this.applicationDelegate.confirm({
        message: 'Atom will be unresponsive during the loading of very large files.',
        detailedMessage: 'Do you still want to load this file?',
@@ -1229,7 +1229,7 @@ module.exports = class Workspace extends Model {
 
     return this.project.bufferForPath(filePath, options)
       .then(buffer => {
-        return this.textEditorRegistry.build(Object.assign({buffer, largeFileMode, autoHeight: false}, options))
+        return this.textEditorRegistry.build(Object.assign({buffer, autoHeight: false}, options))
       })
   }
 
@@ -1250,11 +1250,8 @@ module.exports = class Workspace extends Model {
   // Returns a {TextEditor}.
   buildTextEditor (params) {
     const editor = this.textEditorRegistry.build(params)
-    const subscriptions = new CompositeDisposable(
-      this.textEditorRegistry.maintainGrammar(editor),
-      this.textEditorRegistry.maintainConfig(editor)
-    )
-    editor.onDidDestroy(() => { subscriptions.dispose() })
+    const subscription = this.textEditorRegistry.maintainConfig(editor)
+    editor.onDidDestroy(() => subscription.dispose())
     return editor
   }