+`<p>` tags in the HTML output. For example, this input:
+
+ * Bird
+ * Magic
+
+will turn into:
+
+    <ul>
+    <li>Bird</li>
+    <li>Magic</li>
+    </ul>
+It's worth noting that it's possible to trigger an ordered list by
+accident, by writing something like this:
+
+ 1986. What a great season.
+
+In other words, a *number-period-space* sequence at the beginning of a
+line. To avoid this, you can backslash-escape the period:
+
+ 1986\. What a great season.
+
+
+
+Code Blocks
+
+Pre-formatted code blocks are used for writing about programming or
+markup source code. Rather than forming normal paragraphs, the lines
+of a code block are interpreted literally. Markdown wraps a code block
+in both `<pre>` and `<code>` tags.
+
+To produce a code block in Markdown, simply indent every line of the
+block by at least 4 spaces or 1 tab. For example, given this input:
+
+    This is a normal paragraph:
+
+        This is a code block.
+
+Markdown will generate:
+
+    <p>This is a normal paragraph:</p>
+
+    <pre><code>This is a code block.
+    </code></pre>
+
+
+One level of indentation -- 4 spaces or 1 tab -- is removed from each
+line of the code block. For example, this:
+
+    Here is an example of AppleScript:
+
+        tell application "Foo"
+            beep
+        end tell
+
+will turn into:
+
+    <p>Here is an example of AppleScript:</p>
+
+    <pre><code>tell application "Foo"
+        beep
+    end tell
+    </code></pre>
+
+
+A code block continues until it reaches a line that is not indented
+(or the end of the article).
+
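+For example, in the following input the unindented line ends the
+code block and starts a new paragraph:
+
+        This is a code block.
+
+    This line is not indented, so it ends the block.
+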
+Within a code block, ampersands (`&`) and angle brackets (`<` and `>`)
+are automatically converted into HTML entities. This makes it very
+easy to include example HTML source code using Markdown -- just paste
+it and indent it, and Markdown will handle the hassle of encoding the
+ampersands and angle brackets. For example, this:
+
+    <div class="footer">
+        &copy; 2004 Foo Corporation
+    </div>
+will turn into:
+
+    <pre><code>&lt;div class="footer"&gt;
+        &amp;copy; 2004 Foo Corporation
+    &lt;/div&gt;
+    </code></pre>
+
+
+Regular Markdown syntax is not processed within code blocks. E.g.,
+asterisks are just literal asterisks within a code block. This means
+it's also easy to use Markdown to write about Markdown's own syntax.
+
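+For example, this:
+
+        *literal asterisks*
+
+will turn into:
+
+    <pre><code>*literal asterisks*
+    </code></pre>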
+
+
+Horizontal Rules
+
+You can produce a horizontal rule tag (`<hr />`) by placing three or
+more hyphens, asterisks, or underscores on a line by themselves. If you
+wish, you may use spaces between the hyphens or asterisks. Each of the
+following lines will produce a horizontal rule:
+
+ * * *
+
+ ***
+
+ *****
+
+ - - -
+
+ ---------------------------------------
+
+
+* * *
+
+Span Elements
+
+Links
+
+Markdown supports two styles of links: *inline* and *reference*.
+
+In both styles, the link text is delimited by [square brackets].
+
+To create an inline link, use a set of regular parentheses immediately
+after the link text's closing square bracket. Inside the parentheses,
+put the URL where you want the link to point, along with an *optional*
+title for the link, surrounded in quotes. For example:
+
+ This is [an example](http://example.com/ "Title") inline link.
+
+ [This link](http://example.net/) has no title attribute.
+
+Will produce:
+
+    <p>This is <a href="http://example.com/" title="Title">
+    an example</a> inline link.</p>
+
+    <p><a href="http://example.net/">This link</a> has no
+    title attribute.</p>
+
+If you're referring to a local resource on the same server, you can
+use relative paths:
+
+ See my [About](/about/) page for details.
+
+Reference-style links use a second set of square brackets, inside
+which you place a label of your choosing to identify the link:
+
+ This is [an example][id] reference-style link.
+
+You can optionally use a space to separate the sets of brackets:
+
+ This is [an example] [id] reference-style link.
+
+Then, anywhere in the document, you define your link label like this,
+on a line by itself:
+
+ [id]: http://example.com/ "Optional Title Here"
+
+That is:
+
+* Square brackets containing the link identifier (optionally
+ indented from the left margin using up to three spaces);
+* followed by a colon;
+* followed by one or more spaces (or tabs);
+* followed by the URL for the link;
+* optionally followed by a title attribute for the link, enclosed
+ in double or single quotes, or enclosed in parentheses.
+
+The following three link definitions are equivalent:
+
+ [foo]: http://example.com/ "Optional Title Here"
+ [foo]: http://example.com/ 'Optional Title Here'
+ [foo]: http://example.com/ (Optional Title Here)
+
+**Note:** There is a known bug in Markdown.pl 1.0.1 which prevents
+single quotes from being used to delimit link titles.
+
+The link URL may, optionally, be surrounded by angle brackets:
+
+    [id]: <http://example.com/>  "Optional Title Here"
+
+You can put the title attribute on the next line and use extra spaces
+or tabs for padding, which tends to look better with longer URLs:
+
+ [id]: http://example.com/longish/path/to/resource/here
+ "Optional Title Here"
+
+Link definitions are only used for creating links during Markdown
+processing, and are stripped from your document in the HTML output.
+
+Link definition names may consist of letters, numbers, spaces, and
+punctuation -- but they are *not* case sensitive. E.g. these two
+links:
+
+ [link text][a]
+ [link text][A]
+
+are equivalent.
+
+The *implicit link name* shortcut allows you to omit the name of the
+link, in which case the link text itself is used as the name.
+Just use an empty set of square brackets -- e.g., to link the word
+"Google" to the google.com web site, you could simply write:
+
+ [Google][]
+
+And then define the link:
+
+ [Google]: http://google.com/
+
+Because link names may contain spaces, this shortcut even works for
+multiple words in the link text:
+
+ Visit [Daring Fireball][] for more information.
+
+And then define the link:
+
+ [Daring Fireball]: http://daringfireball.net/
+
+Link definitions can be placed anywhere in your Markdown document. I
+tend to put them immediately after each paragraph in which they're
+used, but if you want, you can put them all at the end of your
+document, sort of like footnotes.
+
+Here's an example of reference links in action:
+
+ I get 10 times more traffic from [Google] [1] than from
+ [Yahoo] [2] or [MSN] [3].
+
+ [1]: http://google.com/ "Google"
+ [2]: http://search.yahoo.com/ "Yahoo Search"
+ [3]: http://search.msn.com/ "MSN Search"
+
+Using the implicit link name shortcut, you could instead write:
+
+ I get 10 times more traffic from [Google][] than from
+ [Yahoo][] or [MSN][].
+
+ [google]: http://google.com/ "Google"
+ [yahoo]: http://search.yahoo.com/ "Yahoo Search"
+ [msn]: http://search.msn.com/ "MSN Search"
+
+Both of the above examples will produce the following HTML output:
+
+    <p>I get 10 times more traffic from <a href="http://google.com/"
+    title="Google">Google</a> than from
+    <a href="http://search.yahoo.com/" title="Yahoo Search">Yahoo</a>
+    or <a href="http://search.msn.com/" title="MSN Search">MSN</a>.</p>
+
+For comparison, here is the same paragraph written using
+Markdown's inline link style:
+
+ I get 10 times more traffic from [Google](http://google.com/ "Google")
+ than from [Yahoo](http://search.yahoo.com/ "Yahoo Search") or
+ [MSN](http://search.msn.com/ "MSN Search").
+
+The point of reference-style links is not that they're easier to
+write. The point is that with reference-style links, your document
+source is vastly more readable. Compare the above examples: using
+reference-style links, the paragraph itself is only 81 characters
+long; with inline-style links, it's 176 characters; and as raw HTML,
+it's 234 characters. In the raw HTML, there's more markup than there
+is text.
+
+With Markdown's reference-style links, a source document much more
+closely resembles the final output, as rendered in a browser. By
+allowing you to move the markup-related metadata out of the paragraph,
+you can add links without interrupting the narrative flow of your
+prose.
+
+
+Emphasis
+
+Markdown treats asterisks (`*`) and underscores (`_`) as indicators of
+emphasis. Text wrapped with one `*` or `_` will be wrapped with an
+HTML `<em>` tag; double `*`'s or `_`'s will be wrapped with an HTML
+`<strong>` tag. E.g., this input:
+
+ *single asterisks*
+
+ _single underscores_
+
+ **double asterisks**
+
+ __double underscores__
+
+will produce:
+
+    <em>single asterisks</em>
+
+    <em>single underscores</em>
+
+    <strong>double asterisks</strong>
+
+    <strong>double underscores</strong>
+
+You can use whichever style you prefer; the lone restriction is that
+the same character must be used to open and close an emphasis span.
+
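+For example, `*single asterisks*` produces emphasis, but a mismatched
+pair like `*text_` is left as literal text.
+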
+Emphasis can be used in the middle of a word:
+
+ un*fucking*believable
+
+But if you surround an `*` or `_` with spaces, it'll be treated as a
+literal asterisk or underscore.
+
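+For example, Markdown will leave this line entirely alone:
+
+    You can buy 4 * 5 = 20 apples.
+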
+To produce a literal asterisk or underscore at a position where it
+would otherwise be used as an emphasis delimiter, you can backslash
+escape it:
+
+ \*this text is surrounded by literal asterisks\*
+
+
+
+Code
+
+To indicate a span of code, wrap it with backtick quotes (`` ` ``).
+Unlike a pre-formatted code block, a code span indicates code within a
+normal paragraph. For example:
+
+ Use the `printf()` function.
+
+will produce:
+
+    <p>Use the <code>printf()</code> function.</p>
+
+To include a literal backtick character within a code span, you can use
+multiple backticks as the opening and closing delimiters:
+
+ ``There is a literal backtick (`) here.``
+
+which will produce this:
+
+    <p><code>There is a literal backtick (`) here.</code></p>
+
+The backtick delimiters surrounding a code span may include spaces --
+one after the opening, one before the closing. This allows you to place
+literal backtick characters at the beginning or end of a code span:
+
+ A single backtick in a code span: `` ` ``
+
+ A backtick-delimited string in a code span: `` `foo` ``
+
+will produce:
+
+    <p>A single backtick in a code span: <code>`</code></p>
+
+    <p>A backtick-delimited string in a code span: <code>`foo`</code></p>
+
+With a code span, ampersands and angle brackets are encoded as HTML
+entities automatically, which makes it easy to include example HTML
+tags. Markdown will turn this:
+
+    Please don't use any `<blink>` tags.
+
+into:
+
+    <p>Please don't use any &lt;blink&gt; tags.</p>
+
+You can write this:
+
+    `&#8212;` is the decimal-encoded equivalent of `&mdash;`.
+
+to produce:
+
+    <p><code>&amp;#8212;</code> is the decimal-encoded
+    equivalent of <code>&amp;mdash;</code>.</p>
+
+
+
+Images
+
+Admittedly, it's fairly difficult to devise a "natural" syntax for
+placing images into a plain text document format.
+
+Markdown uses an image syntax that is intended to resemble the syntax
+for links, allowing for two styles: *inline* and *reference*.
+
+Inline image syntax looks like this:
+
+ 
+
+ 
+
+That is:
+
+* An exclamation mark: `!`;
+* followed by a set of square brackets, containing the `alt`
+ attribute text for the image;
+* followed by a set of parentheses, containing the URL or path to
+ the image, and an optional `title` attribute enclosed in double
+ or single quotes.
+
+Reference-style image syntax looks like this:
+
+ ![Alt text][id]
+
+Where "id" is the name of a defined image reference. Image references
+are defined using syntax identical to link references:
+
+ [id]: url/to/image "Optional title attribute"
+
+As of this writing, Markdown has no syntax for specifying the
+dimensions of an image; if this is important to you, you can simply
+use regular HTML `<img>` tags.
+
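+For example (ordinary HTML, shown here purely for illustration):
+
+    <img src="/path/to/image.jpg" width="150" height="50" alt="Alt text" />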
+
+* * *
+
+
+Miscellaneous
+
+Automatic Links
+
+Markdown supports a shortcut style for creating "automatic" links for
+URLs and email addresses: simply surround the URL or email address with
+angle brackets. What this means is that if you want to show the actual
+text of a URL or email address, and also have it be a clickable link,
+you can do this:
+
+    <http://example.com/>
+
+Markdown will turn this into:
+
+    <a href="http://example.com/">http://example.com/</a>
+
+Automatic links for email addresses work similarly, except that
+Markdown will also perform a bit of randomized decimal and hex
+entity-encoding to help obscure your address from address-harvesting
+spambots. For example, Markdown will turn this:
+
+    <address@example.com>
+
+into something like this:
+
+    <a href="&#x6D;&#x61;&#x69;&#x6C;&#x74;&#x6F;:&#x61;&#x64;&#x64;&#x72;&#x65;&#x73;&#x73;
+    &#x40;&#x65;&#x78;&#x61;&#x6D;&#x70;&#x6C;&#x65;&#x2E;&#x63;&#x6F;&#x6D;">&#x61;&#x64;
+    &#x64;&#x72;&#x65;&#x73;&#x73;&#x40;&#x65;&#x78;&#x61;&#x6D;&#x70;&#x6C;&#x65;&#x2E;
+    &#x63;&#x6F;&#x6D;</a>
+
+which will render in a browser as a clickable link to "address@example.com".
+
+(This sort of entity-encoding trick will indeed fool many, if not
+most, address-harvesting bots, but it definitely won't fool all of
+them. It's better than nothing, but an address published in this way
+will probably eventually start receiving spam.)
+
+
+
+Backslash Escapes
+
+Markdown allows you to use backslash escapes to generate literal
+characters which would otherwise have special meaning in Markdown's
+formatting syntax. For example, if you wanted to surround a word with
+literal asterisks (instead of an HTML `<em>` tag), you can place
+backslashes before the asterisks, like this:
+
+ \*literal asterisks\*
+
+Markdown provides backslash escapes for the following characters:
+
+ \ backslash
+ ` backtick
+ * asterisk
+ _ underscore
+ {} curly braces
+ [] square brackets
+ () parentheses
+ # hash mark
+ + plus sign
+ - minus sign (hyphen)
+ . dot
+ ! exclamation mark
+
diff --git a/r2/r2/lib/contrib/discount-1.6.0/tests/tables.t b/r2/r2/lib/contrib/discount-1.6.0/tests/tables.t
new file mode 100644
index 000000000..3ba4c0bae
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/tests/tables.t
@@ -0,0 +1,186 @@
+./echo "tables"
+
+rc=0
+MARKDOWN_FLAGS=
+
+try() {
+ unset FLAGS
+ case "$1" in
+ -*) FLAGS=$1
+ shift ;;
+ esac
+
+ ./echo -n " $1" '..................................' | ./cols 36
+
+ Q=`./echo "$2" | ./markdown $FLAGS`
+
+ if [ "$3" = "$Q" ]; then
+ ./echo " ok"
+ else
+ ./echo " FAILED"
+ ./echo "wanted: $3"
+ ./echo "got : $Q"
+ rc=1
+ fi
+}
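+
+# try [-flags] name input expected -- pipe input through ./markdown
+# (with any flags given) and compare the result against expected.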
+
+
+try 'single-column table' \
+ '|hello
+|-----
+|sailor' \
+    '<table>
+<thead>
+<tr>
+<th></th>
+<th>hello</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td></td>
+<td>sailor</td>
+</tr>
+</tbody>
+</table>'
+
+
+try 'two-column table' \
+ '
+ a | b
+-----|------
+hello|sailor' \
+    '<table>
+<thead>
+<tr>
+<th> a </th>
+<th> b</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>hello</td>
+<td>sailor</td>
+</tr>
+</tbody>
+</table>'
+
+try 'three-column table' \
+'a|b|c
+-|-|-
+hello||sailor'\
+    '<table>
+<thead>
+<tr>
+<th>a</th>
+<th>b</th>
+<th>c</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>hello</td>
+<td></td>
+<td>sailor</td>
+</tr>
+</tbody>
+</table>'
+
+try 'two-column table with empty cells' \
+ '
+ a | b
+-----|------
+hello|
+ |sailor' \
+    '<table>
+<thead>
+<tr>
+<th> a </th>
+<th> b</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>hello</td>
+<td></td>
+</tr>
+<tr>
+<td></td>
+<td>sailor</td>
+</tr>
+</tbody>
+</table>'
+
+try 'two-column table with alignment' \
+ '
+ a | b
+----:|:-----
+hello|sailor' \
+    '<table>
+<thead>
+<tr>
+<th align="right"> a </th>
+<th align="left"> b</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td align="right">hello</td>
+<td align="left">sailor</td>
+</tr>
+</tbody>
+</table>'
+
+try 'table with extra data column' \
+ '
+ a | b
+-----|------
+hello|sailor|boy' \
+    '<table>
+<thead>
+<tr>
+<th> a </th>
+<th> b</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>hello</td>
+<td>sailor|boy</td>
+</tr>
+</tbody>
+</table>'
+
+
+try -fnotables 'tables with -fnotables' \
+ 'a|b
+-|-
+hello|sailor' \
+    '<p>a|b
+–|–
+hello|sailor</p>'
+
+try 'deceptive non-table text' \
+ 'a | b | c
+
+text' \
+    '<p>a | b | c</p>
+
+<p>text</p>'
+
+try 'table headers only' \
+ 'a|b|c
+-|-|-' \
+ ''
+
+exit $rc
diff --git a/r2/r2/lib/contrib/discount-1.6.0/tests/tabstop.t b/r2/r2/lib/contrib/discount-1.6.0/tests/tabstop.t
new file mode 100644
index 000000000..c577f8b4d
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/tests/tabstop.t
@@ -0,0 +1,66 @@
+rc=0
+unset MARKDOWN_FLAGS
+unset MKD_TABSTOP
+
+try() {
+ unset FLAGS
+ case "$1" in
+ -*) FLAGS=$1
+ shift ;;
+ esac
+
+ ./echo -n " $1" '..................................' | ./cols 36
+
+ Q=`./echo "$2" | ./markdown $FLAGS`
+
+ if [ "$3" = "$Q" ]; then
+ ./echo " ok"
+ else
+ ./echo " FAILED"
+ ./echo "wanted: $3"
+ ./echo "got : $Q"
+ rc=1
+ fi
+}
+
+eval `./markdown -V | tr ' ' '\n' | grep TAB`
+
+if [ "${TAB:-4}" -eq 8 ]; then
+ ./echo "dealing with tabstop derangement"
+
+ LIST='
+ * A
+ * B
+ * C'
+
+ try 'markdown with TAB=8' \
+ "$LIST" \
+ ''
+
+ try -F0x0200 'markdown with TAB=4' \
+ "$LIST" \
+ ''
+
+fi
+
+exit $rc
diff --git a/r2/r2/lib/contrib/discount-1.6.0/tests/toc.t b/r2/r2/lib/contrib/discount-1.6.0/tests/toc.t
new file mode 100644
index 000000000..6408d4cce
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/tests/toc.t
@@ -0,0 +1,41 @@
+./echo "table-of-contents support"
+
+rc=0
+MARKDOWN_FLAGS=
+
+try() {
+ unset FLAGS
+
+ case "$1" in
+ -*) FLAGS=$1
+ shift ;;
+ esac
+
+ ./echo -n " $1" '..................................' | ./cols 36
+
+ Q=`./echo "$2" | ./markdown $FLAGS`
+
+ if [ "$3" = "$Q" ]; then
+ ./echo " ok"
+ else
+ ./echo " FAILED"
+ ./echo "wanted: $3"
+ ./echo "got : $Q"
+ rc=1
+ fi
+}
+
+
+try '-T -ftoc' 'table of contents' \
+'#H1
+hi' \
+'<ul>
+ <li><a href="#H1">H1</a></li>
+</ul>
+
+<p>hi</p>'
+
+
+exit $rc
diff --git a/r2/r2/lib/contrib/discount-1.6.0/tests/xml.t b/r2/r2/lib/contrib/discount-1.6.0/tests/xml.t
new file mode 100644
index 000000000..c9a3d2396
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/tests/xml.t
@@ -0,0 +1,39 @@
+./echo "xml output with MKD_CDATA"
+
+rc=0
+MARKDOWN_FLAGS=
+
+try() {
+ unset FLAGS
+ case "$1" in
+ -*) FLAGS=$1
+ shift ;;
+ esac
+
+ ./echo -n " $1" '..................................' | ./cols 36
+
+ case "$2" in
+ -t*) Q=`./markdown $FLAGS "$2"` ;;
+ *) Q=`./echo "$2" | ./markdown $FLAGS` ;;
+ esac
+
+ if [ "$3" = "$Q" ]; then
+ ./echo " ok"
+ else
+ ./echo " FAILED"
+ ./echo "wanted: $3"
+ ./echo "got : $Q"
+ rc=1
+ fi
+}
+
+try -fcdata 'xml output from markdown()' 'hello,sailor' '&lt;p&gt;hello,sailor&lt;/p&gt;'
+try -fcdata 'from mkd_generateline()' -t'"hello,sailor"' '&ldquo;hello,sailor&rdquo;'
+try -fnocdata 'html output from markdown()' '"hello,sailor"' '<p>&ldquo;hello,sailor&rdquo;</p>'
+try -fnocdata '... from mkd_generateline()' -t'"hello,sailor"' '&ldquo;hello,sailor&rdquo;'
+
+try -fcdata 'xml output with multibyte utf-8' \
+    'tecnología y servicios más confiables' \
+    '&lt;p&gt;tecnología y servicios más confiables&lt;/p&gt;'
+
+exit $rc
diff --git a/r2/r2/lib/contrib/discount-1.6.0/theme.1 b/r2/r2/lib/contrib/discount-1.6.0/theme.1
new file mode 100644
index 000000000..473b913ee
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/theme.1
@@ -0,0 +1,142 @@
+.\" %A%
+.\"
+.Dd January 23, 2008
+.Dt THEME 1
+.Os MASTODON
+.Sh NAME
+.Nm theme
+.Nd create a web page from a template file
+.Sh SYNOPSIS
+.Nm
+.Op Fl d Pa root
+.Op Fl f
+.Op Fl o Pa file
+.Op Fl p Pa pagename
+.Op Fl t Pa template
+.Op Fl V
+.Op Pa textfile
+.Sh DESCRIPTION
+The
+.Nm
+utility takes a
+.Xr markdown 7 Ns -formatted
+.Pa textfile
+.Pq or stdin if not specified,
+compiles it, and combines it with a
+.Em template
+.Po
+.Pa page.theme
+by default
+.Pc
+to produce a web page. If a path to the
+template is not specified,
+.Nm
+looks for
+.Pa page.theme
+in the current directory, then each parent directory up to the
+.Pa "document root"
+.Po
+set with
+.Fl d
+or, if unset, the
+.Em "root directory"
+of the system.
+.Pc
+If
+.Pa page.theme
+is found,
+.Nm
+copies it to the output, looking for
+.Em "<?theme ...?>"
+html tags and processing the embedded
+.Ar action
+as appropriate.
+.Pp
+.Nm
+processes the following actions:
+.Bl -tag -width "include("
+.It Ar author
+Prints the author name(s) from the
+.Xr mkd_doc_author 3
+function.
+.It Ar body
+Prints the formatted
+.Xr markdown 7
+input file.
+.It Ar date
+Prints the date returned by
+.Xr mkd_doc_date 3
+or, if none, the
+date the input file was last modified.
+.It Ar dir
+Prints the directory part of the pagename
+.It Ar include Ns Pq Pa file
+Prints the contents of
+.Pa file .
+.Xr Markdown 7
+translation will
+.Em NOT
+be done on this file.
+.It Ar source
+The filename part of the pagename.
+.It Ar style
+Print any stylesheets
+.Pq see Xr mkd-extensions 7
+found in the input file.
+.It Ar title
+Print the title returned by
+.Xr mkd_doc_title 3 ,
+or, if that does not exist, the source filename.
+.It Ar version
+Print the version of
+.Xr discount 7
+that this copy of theme was compiled with.
+.El
+.Pp
+If input is coming from a file and the output was not set with the
+.Fl o
+option,
+.Nm
+writes the output to
+.Pa file-sans-text.html
+.Pq if
+.Ar file
+has a
+.Pa .text
+suffix, that will be stripped off and replaced with
+.Pa .html ;
+otherwise a
+.Pa .html
+will be appended to the end of the filename.)
+.Pp
+The options are as follows:
+.Bl -tag -width "-o file"
+.It Fl d Pa root
+Set the
+.Em "document root"
+to
+.Ar root
+.It Fl f
+Forcibly overwrite existing html files.
+.It Fl o Pa filename
+Write the output to
+.Ar filename .
+.It Fl p Ar path
+Set the pagename to
+.Ar path .
+.It Fl t Ar filename
+Use
+.Ar filename
+as the template file.
+.El
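+.Sh EXAMPLES
+A minimal template might look like this
+.Pq the markup shown is illustrative only :
+.Bd -literal -offset indent
+<html>
+<head><title><?theme title?></title></head>
+<body><?theme body?></body>
+</html>
+.Ed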
+.Sh RETURN VALUES
+The
+.Nm
+utility exits 0 on success, and >0 if an error occurs.
+.Sh SEE ALSO
+.Xr markdown 1 ,
+.Xr markdown 3 ,
+.Xr markdown 7 ,
+.Xr mkd-extensions 7 .
+.Sh AUTHOR
+.An David Parsons
+.Pq Li orc@pell.chi.il.us
diff --git a/r2/r2/lib/contrib/discount-1.6.0/theme.c b/r2/r2/lib/contrib/discount-1.6.0/theme.c
new file mode 100644
index 000000000..97f401aae
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/theme.c
@@ -0,0 +1,593 @@
+/*
+ * theme: use a template to create a webpage (markdown-style)
+ *
+ * usage: theme [-d root] [-p pagename] [-t template] [-o html] [source]
+ *
+ */
+/*
+ * Copyright (C) 2007 David L Parsons.
+ * The redistribution terms are provided in the COPYRIGHT file that must
+ * be distributed with this source code.
+ */
+#include "config.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#if defined(HAVE_BASENAME) && defined(HAVE_LIBGEN_H)
+# include <libgen.h>
+#endif
+#include <stdarg.h>
+#include <ctype.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <errno.h>
+#if HAVE_PWD_H
+# include <pwd.h>
+#endif
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <time.h>
+
+#include "mkdio.h"
+#include "cstring.h"
+#include "amalloc.h"
+
+char *pgm = "theme";
+char *output = 0;
+char *pagename = 0;
+char *root = 0;
+#if HAVE_PWD_H
+struct passwd *me = 0;
+#endif
+struct stat *infop = 0;
+
+#ifndef HAVE_BASENAME
+char *
+basename(char *path)
+{
+ char *p;
+
+ if (( p = strrchr(path, '/') ))
+ return 1+p;
+ return path;
+}
+#endif
+
+#ifdef HAVE_FCHDIR
+typedef int HERE;
+#define NOT_HERE (-1)
+
+#define pushd(d) open(d, O_RDONLY)
+
+int
+popd(HERE pwd)
+{
+ int rc = fchdir(pwd);
+ close(pwd);
+ return rc;
+}
+
+#else
+
+typedef char* HERE;
+#define NOT_HERE 0
+
+HERE
+pushd(char *d)
+{
+ HERE cwd;
+ int size;
+
+ if ( chdir(d) == -1 )
+ return NOT_HERE;
+
+ for (cwd = malloc(size=40); cwd; cwd = realloc(cwd, size *= 2))
+ if ( getcwd(cwd, size) )
+ return cwd;
+
+ return NOT_HERE;
+}
+
+int
+popd(HERE pwd)
+{
+ if ( pwd ) {
+ int rc = chdir(pwd);
+ free(pwd);
+
+ return rc;
+ }
+ return -1;
+}
+#endif
+
+typedef STRING(int) Istring;
+
+void
+fail(char *why, ...)
+{
+ va_list ptr;
+
+ va_start(ptr,why);
+ fprintf(stderr, "%s: ", pgm);
+ vfprintf(stderr, why, ptr);
+ fputc('\n', stderr);
+ va_end(ptr);
+ exit(1);
+}
+
+
+/* open_template() -- start at the current directory and work up,
+ * looking for the deepest nested template.
+ * Stop looking when we reach $root or /
+ */
+FILE *
+open_template(char *template)
+{
+ char *cwd;
+ int szcwd;
+ HERE here = pushd(".");
+ FILE *ret;
+
+ if ( here == NOT_HERE )
+ fail("cannot access the current directory");
+
+ szcwd = root ? 1 + strlen(root) : 2;
+
+ if ( (cwd = malloc(szcwd)) == 0 )
+ return 0;
+
+ while ( !(ret = fopen(template, "r")) ) {
+ if ( getcwd(cwd, szcwd) == 0 ) {
+ if ( errno == ERANGE )
+ goto up;
+ break;
+ }
+
+ if ( root && (strcmp(root, cwd) == 0) )
+ break; /* ran out of paths to search */
+ else if ( (strcmp(cwd, "/") == 0) || (*cwd == 0) )
+ break; /* reached / */
+
+ up: if ( chdir("..") == -1 )
+ break;
+ }
+ free(cwd);
+ popd(here);
+ return ret;
+} /* open_template */
+
+
+static Istring inbuf;
+static int psp;
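+
+/* inbuf holds the template text slurped in by prepare(); psp is the
+ * read cursor shared by pull(), peek(), shift() and cursor() below.
+ */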
+
+static int
+prepare(FILE *input)
+{
+ int c;
+
+ CREATE(inbuf);
+ psp = 0;
+ while ( (c = getc(input)) != EOF )
+ EXPAND(inbuf) = c;
+ fclose(input);
+ return 1;
+}
+
+static int
+pull()
+{
+ return psp < S(inbuf) ? T(inbuf)[psp++] : EOF;
+}
+
+static int
+peek(int offset)
+{
+ int pos = (psp + offset)-1;
+
+ if ( pos >= 0 && pos < S(inbuf) )
+ return T(inbuf)[pos];
+
+ return EOF;
+}
+
+static int
+shift(int shiftwidth)
+{
+ psp += shiftwidth;
+ return psp;
+}
+
+static int*
+cursor()
+{
+ return T(inbuf) + psp;
+}
+
+
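+/* thesame() -- case-insensitively match pat against the upcoming
+ * input; a space in pat matches any single whitespace character.
+ */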
+static int
+thesame(int *p, char *pat)
+{
+ int i;
+
+ for ( i=0; pat[i]; i++ ) {
+ if ( pat[i] == ' ' ) {
+ if ( !isspace(peek(i+1)) ) {
+ return 0;
+ }
+ }
+ else if ( tolower(peek(i+1)) != pat[i] ) {
+ return 0;
+ }
+ }
+ return 1;
+}
+
+
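+/* istag() -- true if the upcoming input is the tag pat, followed
+ * by '>' or whitespace.
+ */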
+static int
+istag(int *p, char *pat)
+{
+ int c;
+
+ if ( thesame(p, pat) ) {
+ c = peek(strlen(pat)+1);
+ return (c == '>' || isspace(c));
+ }
+ return 0;
+}
+
+
+/* finclude() includes some (unformatted) source
+ */
+static void
+finclude(MMIOT *doc, FILE *out, int flags)
+{
+ int c;
+ Cstring include;
+ FILE *f;
+
+ CREATE(include);
+
+ while ( (c = pull()) != '(' )
+ ;
+
+ while ( (c=pull()) != ')' && c != EOF )
+ EXPAND(include) = c;
+
+ if ( c != EOF ) {
+ EXPAND(include) = 0;
+ S(include)--;
+
+ if (( f = fopen(T(include), "r") )) {
+ while ( (c = getc(f)) != EOF )
+ putc(c, out);
+ fclose(f);
+ }
+ }
+ DELETE(include);
+}
+
+
+/* fdirname() prints out the directory part of a path
+ */
+static void
+fdirname(MMIOT *doc, FILE *output, int flags)
+{
+ char *p;
+
+ if ( pagename && (p = basename(pagename)) )
+ fwrite(pagename, strlen(pagename)-strlen(p), 1, output);
+}
+
+
+/* fbasename() prints out the file name part of a path
+ */
+static void
+fbasename(MMIOT *doc, FILE *output, int flags)
+{
+ char *p;
+
+ if ( pagename ) {
+ p = basename(pagename);
+
+ if ( !p )
+ p = pagename;
+
+ if ( p )
+ fwrite(p, strlen(p), 1, output);
+ }
+}
+
+
+/* ftitle() prints out the document title
+ */
+static void
+ftitle(MMIOT *doc, FILE* output, int flags)
+{
+ char *h;
+ if ( (h = mkd_doc_title(doc)) == 0 && pagename )
+ h = pagename;
+
+ if ( h )
+ mkd_generateline(h, strlen(h), output, flags);
+}
+
+
+/* fdate() prints out the document date
+ */
+static void
+fdate(MMIOT *doc, FILE *output, int flags)
+{
+ char *h;
+
+ if ( (h = mkd_doc_date(doc)) || ( infop && (h = ctime(&infop->st_mtime)) ) )
+ mkd_generateline(h, strlen(h), output, flags|MKD_TAGTEXT);
+}
+
+
+/* fauthor() prints out the document author
+ */
+static void
+fauthor(MMIOT *doc, FILE *output, int flags)
+{
+ char *h = mkd_doc_author(doc);
+
+#if HAVE_PWD_H
+ if ( (h == 0) && me )
+ h = me->pw_gecos;
+#endif
+
+ if ( h )
+ mkd_generateline(h, strlen(h), output, flags);
+}
+
+
+/* fversion() prints out the document version
+ */
+static void
+fversion(MMIOT *doc, FILE *output, int flags)
+{
+ fwrite(markdown_version, strlen(markdown_version), 1, output);
+}
+
+
+/* fbody() prints out the document
+ */
+static void
+fbody(MMIOT *doc, FILE *output, int flags)
+{
+ mkd_generatehtml(doc, output);
+}
+
+/* ftoc() prints out the table of contents
+ */
+static void
+ftoc(MMIOT *doc, FILE *output, int flags)
+{
+ mkd_generatetoc(doc, output);
+}
+
+/* fstyle() prints out the document's style section
+ */
+static void
+fstyle(MMIOT *doc, FILE *output, int flags)
+{
+ mkd_generatecss(doc, output);
+}
+
+
+#define INTAG 0x01
+#define INHEAD 0x02
+#define INBODY 0x04
+
+/*
+ * theme expansions we love:
+ * -- the document date (file or header date)
+ * -- the document title (header title or document name)
+ * -- the document author (header author or document owner)
+ * -- the version#
+ * -- the document body
+ * -- the filename part of the document name
+ * -- the directory part of the document name
+ * -- the html file name
+ * -- document-supplied style blocks
+ * -- include a file.
+ */
+static struct _keyword {
+ char *kw;
+ int where;
+ void (*what)(MMIOT*,FILE*,int);
+} keyword[] = {
+ { "author?>", 0xffff, fauthor },
+ { "body?>", INBODY, fbody },
+ { "toc?>", INBODY, ftoc },
+ { "date?>", 0xffff, fdate },
+ { "dir?>", 0xffff, fdirname },
+ { "include(", 0xffff, finclude },
+ { "source?>", 0xffff, fbasename },
+ { "style?>", INHEAD, fstyle },
+ { "title?>", 0xffff, ftitle },
+ { "version?>", 0xffff, fversion },
+};
+#define NR(x) (sizeof x / sizeof x[0])
+
+
+/* spin() - run through the theme template, looking for <?theme expansions
+ */
+void
+spin(FILE *template, MMIOT *doc, FILE *output)
+{
+    int c, i;
+    int *p;
+    int flags;
+    int where = 0x0;
+
+    prepare(template);
+
+    while ( (c = pull()) != EOF ) {
+	if ( c == '<' ) {
+	    /* pass html comments through untouched */
+	    if ( peek(1) == '!' && peek(2) == '-' && peek(3) == '-' ) {
+		fputs("<!--", output);
+		shift(3);
+		do {
+		    putc(c=pull(), output);
+		} while ( ! (c == '-' && peek(1) == '-' && peek(2) == '>') );
+ }
+ else if ( (peek(1) == '?') && thesame(cursor(), "?theme ") ) {
+ shift(strlen("?theme "));
+
+ while ( ((c = pull()) != EOF) && isspace(c) )
+ ;
+
+ shift(-1);
+ p = cursor();
+
+ if ( where & INTAG )
+ flags = MKD_TAGTEXT;
+ else if ( where & INHEAD )
+ flags = MKD_NOIMAGE|MKD_NOLINKS;
+ else
+ flags = 0;
+
+ for (i=0; i < NR(keyword); i++)
+ if ( thesame(p, keyword[i].kw) ) {
+ if ( keyword[i].where & where )
+ (*keyword[i].what)(doc,output,flags);
+ break;
+ }
+
+ while ( (c = pull()) != EOF && (c != '?' && peek(1) != '>') )
+ ;
+ shift(1);
+ }
+ else
+ putc(c, output);
+
+ if ( istag(cursor(), "head") ) {
+ where |= INHEAD;
+ where &= ~INBODY;
+ }
+ else if ( istag(cursor(), "body") ) {
+ where &= ~INHEAD;
+ where |= INBODY;
+ }
+ where |= INTAG;
+ continue;
+ }
+ else if ( c == '>' )
+ where &= ~INTAG;
+
+ putc(c, output);
+ }
+} /* spin */
+
+
+void
+main(argc, argv)
+char **argv;
+{
+ char *template = "page.theme";
+ char *source = "stdin";
+ FILE *tmplfile;
+ int opt;
+ int force = 0;
+ MMIOT *doc;
+ struct stat sourceinfo;
+
+ opterr=1;
+ pgm = basename(argv[0]);
+
+ while ( (opt=getopt(argc, argv, "fd:t:p:o:V")) != EOF ) {
+ switch (opt) {
+ case 'd': root = optarg;
+ break;
+ case 'p': pagename = optarg;
+ break;
+ case 'f': force = 1;
+ break;
+ case 't': template = optarg;
+ break;
+ case 'o': output = optarg;
+ break;
+ case 'V': printf("theme+discount %s\n", markdown_version);
+ exit(0);
+ default: fprintf(stderr, "usage: %s [-V] [-d dir] [-p pagename] [-t template] [-o html] [file]\n", pgm);
+ exit(1);
+ }
+ }
+
+ tmplfile = open_template(template);
+
+ argc -= optind;
+ argv += optind;
+
+
+ if ( argc > 0 ) {
+ int added_text=0;
+
+ if ( (source = malloc(strlen(argv[0]) + strlen("/index.text") + 1)) == 0 )
+ fail("out of memory allocating name buffer");
+
+ strcpy(source,argv[0]);
+ if ( (stat(source, &sourceinfo) == 0) && S_ISDIR(sourceinfo.st_mode) )
+ strcat(source, "/index");
+
+ if ( !freopen(source, "r", stdin) ) {
+ strcat(source, ".text");
+ added_text = 1;
+ if ( !freopen(source, "r", stdin) )
+ fail("can't open either %s or %s", argv[0], source);
+ }
+
+ if ( !output ) {
+ char *p, *q;
+ output = alloca(strlen(source) + strlen(".html") + 1);
+
+ strcpy(output, source);
+
+ if (( p = strchr(output, '/') ))
+ q = strrchr(p+1, '.');
+ else
+ q = strrchr(output, '.');
+
+ if ( q )
+ *q = 0;
+	    strcat(output, ".html");
+ }
+ }
+ if ( output ) {
+ if ( force )
+ unlink(output);
+ if ( !freopen(output, "w", stdout) )
+ fail("can't write to %s", output);
+ }
+
+ if ( !pagename )
+ pagename = source;
+
+ if ( (doc = mkd_in(stdin, 0)) == 0 )
+ fail("can't read %s", source ? source : "stdin");
+
+ if ( fstat(fileno(stdin), &sourceinfo) == 0 )
+ infop = &sourceinfo;
+
+#if HAVE_GETPWUID
+ me = getpwuid(infop ? infop->st_uid : getuid());
+
+ if ( (root = strdup(me->pw_dir)) == 0 )
+ fail("out of memory");
+#endif
+
+ if ( !mkd_compile(doc, MKD_TOC) )
+ fail("couldn't compile input");
+
+ if ( tmplfile )
+ spin(tmplfile,doc,stdout);
+ else
+ mkd_generatehtml(doc, stdout);
+
+ mkd_cleanup(doc);
+ exit(0);
+}
diff --git a/r2/r2/lib/contrib/discount-1.6.0/toc.c b/r2/r2/lib/contrib/discount-1.6.0/toc.c
new file mode 100644
index 000000000..cf957c85e
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/toc.c
@@ -0,0 +1,90 @@
+/*
+ * toc -- spit out a table of contents based on header blocks
+ *
+ * Copyright (C) 2008 Jjgod Jiang, David L Parsons.
+ * The redistribution terms are provided in the COPYRIGHT file that must
+ * be distributed with this source code.
+ */
+#include "config.h"
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "cstring.h"
+#include "markdown.h"
+#include "amalloc.h"
+
+/* write a header index
+ */
+int
+mkd_toc(Document *p, char **doc)
+{
+ Paragraph *tp, *srcp;
+ int last_hnumber = 0;
+ Cstring res;
+
+ CREATE(res);
+ RESERVE(res, 100);
+
+ *doc = 0;
+
+ if ( !(p && p->ctx) ) return -1;
+ if ( ! (p->ctx->flags & TOC) ) return 0;
+
+ for ( tp = p->code; tp ; tp = tp->next ) {
+ if ( tp->typ == SOURCE ) {
+ for ( srcp = tp->down; srcp; srcp = srcp->next ) {
+ if ( srcp->typ == HDR && srcp->text ) {
+
+		    if ( last_hnumber == srcp->hnumber )
+			Csprintf(&res, "%*s</li>\n", srcp->hnumber, "");
+		    else while ( last_hnumber > srcp->hnumber ) {
+			Csprintf(&res, "%*s</li>\n%*s</ul>\n",
+				       last_hnumber, "",
+				       last_hnumber-1,"");
+			--last_hnumber;
+		    }
+
+		    while ( srcp->hnumber > last_hnumber ) {
+			Csprintf(&res, "\n%*s<ul>\n", srcp->hnumber, "");
+			++last_hnumber;
+		    }
+		    Csprintf(&res, "%*s<li><a href=\"#", srcp->hnumber, "");
+		    mkd_string_to_anchor(T(srcp->text->text), S(srcp->text->text), Csputc, &res);
+		    Csprintf(&res, "\">");
+		    Csreparse(&res, T(srcp->text->text), S(srcp->text->text), 0);
+		    Csprintf(&res, "</a>");
+ }
+ }
+ }
+ }
+
+ while ( last_hnumber > 0 ) {
+	Csprintf(&res, "%*s</li>\n%*s</ul>\n",
+		       last_hnumber, "", last_hnumber, "");
+ --last_hnumber;
+ }
+ /* HACK ALERT! HACK ALERT! HACK ALERT! */
+ *doc = T(res); /* we know that a T(Cstring) is a character pointer */
+ /* so we can simply pick it up and carry it away, */
+ return S(res); /* leaving the husk of the Ctring on the stack */
+ /* END HACK ALERT */
+}
+
+
+/* write a header index
+ */
+int
+mkd_generatetoc(Document *p, FILE *out)
+{
+ char *buf = 0;
+ int sz = mkd_toc(p, &buf);
+ int ret = EOF;
+
+ if ( sz > 0 )
+ ret = fwrite(buf, sz, 1, out);
+
+ if ( buf ) free(buf);
+
+ return ret;
+}
diff --git a/r2/r2/lib/contrib/discount-1.6.0/tools/cols.c b/r2/r2/lib/contrib/discount-1.6.0/tools/cols.c
new file mode 100644
index 000000000..68ecc590d
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/tools/cols.c
@@ -0,0 +1,38 @@
+#include <stdio.h>
+#include <stdlib.h>
+
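+/* cols -- copy stdin to stdout, truncating each line to the display
+ * width given in argv[1]; UTF-8 continuation bytes are passed through
+ * without counting as extra columns.
+ */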
+main(argc, argv)
+char **argv;
+{
+ register c;
+ int xp;
+ int width;
+
+ if ( argc != 2 ) {
+ fprintf(stderr, "usage: %s width\n", argv[0]);
+ exit(1);
+ }
+ else if ( (width=atoi(argv[1])) < 1 ) {
+ fprintf(stderr, "%s: please set width to > 0\n", argv[0]);
+ exit(1);
+ }
+
+
+ for ( xp = 1; (c = getchar()) != EOF; xp++ ) {
+ while ( c & 0xC0 ) {
+ /* assume that (1) the output device understands utf-8, and
+ * (2) the only c & 0x80 input is utf-8.
+ */
+ do {
+ if ( xp <= width )
+ putchar(c);
+ } while ( (c = getchar()) != EOF && (c & 0x80) && !(c & 0x40) );
+ ++xp;
+ }
+ if ( c == '\n' )
+ xp = 0;
+ if ( xp <= width )
+ putchar(c);
+ }
+ exit(0);
+}
diff --git a/r2/r2/lib/contrib/discount-1.6.0/tools/echo.c b/r2/r2/lib/contrib/discount-1.6.0/tools/echo.c
new file mode 100644
index 000000000..5352caf92
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/tools/echo.c
@@ -0,0 +1,22 @@
+#include <stdio.h>
+#include <string.h>
+
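+/* echo -- minimal echo(1) replacement for the test scripts, with
+ * portable handling of the -n (no trailing newline) flag.
+ */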
+
+main(argc, argv)
+char **argv;
+{
+ int nl = 1;
+ int i;
+
+ if ( (argc > 1) && (strcmp(argv[1], "-n") == 0) ) {
+ ++argv;
+ --argc;
+ nl = 0;
+ }
+
+ for ( i=1; i < argc; i++ ) {
+ if ( i > 1 ) putchar(' ');
+ fputs(argv[i], stdout);
+ }
+ if (nl) putchar('\n');
+}
diff --git a/r2/r2/lib/contrib/discount-1.6.0/version.c b/r2/r2/lib/contrib/discount-1.6.0/version.c
new file mode 100644
index 000000000..be99141bb
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/version.c
@@ -0,0 +1,28 @@
+#include "config.h"
+
+char markdown_version[] = VERSION
+#if DL_TAG_EXTENSION
+ " DL_TAG"
+#endif
+#if PANDOC_HEADER
+ " HEADER"
+#endif
+#if 4 != 4
+ " TAB=4"
+#endif
+#if USE_AMALLOC
+ " DEBUG"
+#endif
+#if SUPERSCRIPT
+ " SUPERSCRIPT"
+#endif
+#if RELAXED_EMPHASIS
+ " RELAXED"
+#endif
+#if DIV_QUOTE
+ " DIV"
+#endif
+#if ALPHA_LIST
+ " AL"
+#endif
+ ;
diff --git a/r2/r2/lib/contrib/discount-1.6.0/version.c.in b/r2/r2/lib/contrib/discount-1.6.0/version.c.in
new file mode 100644
index 000000000..f4875606e
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/version.c.in
@@ -0,0 +1,28 @@
+#include "config.h"
+
+char markdown_version[] = VERSION
+#if DL_TAG_EXTENSION
+ " DL_TAG"
+#endif
+#if PANDOC_HEADER
+ " HEADER"
+#endif
+#if @TABSTOP@ != 4
+ " TAB=@TABSTOP@"
+#endif
+#if USE_AMALLOC
+ " DEBUG"
+#endif
+#if SUPERSCRIPT
+ " SUPERSCRIPT"
+#endif
+#if RELAXED_EMPHASIS
+ " RELAXED"
+#endif
+#if DIV_QUOTE
+ " DIV"
+#endif
+#if ALPHA_LIST
+ " AL"
+#endif
+ ;
diff --git a/r2/r2/lib/contrib/discount-1.6.0/xml.c b/r2/r2/lib/contrib/discount-1.6.0/xml.c
new file mode 100644
index 000000000..5e5838993
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/xml.c
@@ -0,0 +1,82 @@
+/* markdown: a C implementation of John Gruber's Markdown markup language.
+ *
+ * Copyright (C) 2007 David L Parsons.
+ * The redistribution terms are provided in the COPYRIGHT file that must
+ * be distributed with this source code.
+ */
+#include <stdio.h>
+#include <string.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <time.h>
+#include <ctype.h>
+
+#include "config.h"
+
+#include "cstring.h"
+#include "markdown.h"
+#include "amalloc.h"
+
+/* return the xml version of a character
+ */
+static char *
+mkd_xmlchar(unsigned char c)
+{
+ switch (c) {
+    case '<': return "&lt;";
+    case '>': return "&gt;";
+    case '&': return "&amp;";
+    case '"': return "&quot;";
+    case '\'': return "&apos;";
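+    /* note: every byte is either ASCII (isascii) or has its high bit
+     * set, so the final return below is effectively unreachable
+     */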
+ default: if ( isascii(c) || (c & 0x80) )
+ return 0;
+ return "";
+ }
+}
+
+
+/* write output in XML format
+ */
+int
+mkd_generatexml(char *p, int size, FILE *out)
+{
+ unsigned char c;
+ char *entity;
+
+ while ( size-- > 0 ) {
+ c = *p++;
+
+ if ( entity = mkd_xmlchar(c) )
+ fputs(entity, out);
+ else
+ fputc(c, out);
+ }
+ return 0;
+}
+
+
+/* build a xml'ed version of a string
+ */
+int
+mkd_xml(char *p, int size, char **res)
+{
+ unsigned char c;
+ char *entity;
+ Cstring f;
+
+ CREATE(f);
+ RESERVE(f, 100);
+
+ while ( size-- > 0 ) {
+ c = *p++;
+ if ( entity = mkd_xmlchar(c) )
+ Cswrite(&f, entity, strlen(entity));
+ else
+ Csputc(c, &f);
+ }
+ /* HACK ALERT! HACK ALERT! HACK ALERT! */
+ *res = T(f); /* we know that a T(Cstring) is a character pointer */
+ /* so we can simply pick it up and carry it away, */
+ return S(f); /* leaving the husk of the Ctring on the stack */
+ /* END HACK ALERT */
+}
diff --git a/r2/r2/lib/contrib/discount-1.6.0/xmlpage.c b/r2/r2/lib/contrib/discount-1.6.0/xmlpage.c
new file mode 100644
index 000000000..96ed2b758
--- /dev/null
+++ b/r2/r2/lib/contrib/discount-1.6.0/xmlpage.c
@@ -0,0 +1,48 @@
+/*
+ * xmlpage -- write a skeletal xhtml page
+ *
+ * Copyright (C) 2007 David L Parsons.
+ * The redistribution terms are provided in the COPYRIGHT file that must
+ * be distributed with this source code.
+ */
+#include "config.h"
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "cstring.h"
+#include "markdown.h"
+#include "amalloc.h"
+
+
+int
+mkd_xhtmlpage(Document *p, int flags, FILE *out)
+{
+ char *title;
+ extern char *mkd_doc_title(Document *);
+
+ if ( mkd_compile(p, flags) ) {
+	fprintf(out, "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
+	fprintf(out, "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\n");
+
+	fprintf(out, "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n");
+
+	fprintf(out, "<head>\n");
+	if ( title = mkd_doc_title(p) )
+	    fprintf(out, "<title>%s</title>\n", title);
+	mkd_generatecss(p, out);
+	fprintf(out, "</head>\n");
+
+	fprintf(out, "<body>\n");
+	mkd_generatehtml(p, out);
+	fprintf(out, "</body>\n");
+	fprintf(out, "</html>\n");
+
+ mkd_cleanup(p);
+
+ return 0;
+ }
+ return -1;
+}
diff --git a/r2/r2/lib/contrib/memcache.py b/r2/r2/lib/contrib/memcache.py
index abff3bcb1..30bc52ad0 100755
--- a/r2/r2/lib/contrib/memcache.py
+++ b/r2/r2/lib/contrib/memcache.py
@@ -256,8 +256,8 @@ class Client(local):
# return server, key
# serverhash = serverHashFunction(str(serverhash) + str(i))
- print ("Couldn't connect to any of the %d memcache servers" %
- len(self.buckets))
+ print ("Couldn't connect to any of the %d memcache servers: %r" %
+ (len(self.buckets), [ (x.ip, x.port) for x in self.buckets]))
return None, key
def disconnect_all(self):
@@ -940,7 +940,7 @@ class _Host:
buf += foo
if len(foo) == 0:
raise _Error, ( 'Read %d bytes, expecting %d, '
- 'read returned 0 length bytes' % ( len(buf), foo ))
+ 'read returned 0 length bytes' % ( len(buf), rlen ))
self.buffer = buf[rlen:]
return buf[:rlen]
diff --git a/r2/r2/lib/cssfilter.py b/r2/r2/lib/cssfilter.py
index ac3102efd..1df364713 100644
--- a/r2/r2/lib/cssfilter.py
+++ b/r2/r2/lib/cssfilter.py
@@ -28,6 +28,7 @@ from r2.lib.pages.things import wrap_links
from pylons import g, c
from pylons.i18n import _
+from mako import filters
import tempfile
from r2.lib import s3cp
@@ -170,10 +171,21 @@ def valid_url(prop,value,report):
* image labels %%..%% for images uploaded on /about/stylesheet
* urls with domains in g.allowed_css_linked_domains
"""
- url = value.getStringValue()
+ try:
+ url = value.getStringValue()
+ except IndexError:
+ g.log.error("Problem validating [%r]" % value)
+ raise
# local urls are allowed
if local_urls.match(url):
- pass
+ t_url = None
+ while url != t_url:
+ t_url, url = url, filters.url_unescape(url)
+ # disallow path trickery
+ if "../" in url:
+ report.append(ValidationError(msgs['broken_url']
+ % dict(brokenurl = value.cssText),
+ value))
# custom urls are allowed, but need to be transformed into a real path
elif custom_img_urls.match(url):
name = custom_img_urls.match(url).group(1)
@@ -329,13 +341,14 @@ def find_preview_links(sr):
from r2.lib.normalized_hot import get_hot
# try to find a link to use, otherwise give up and return
- links = get_hot(c.site, only_fullnames = True)
+ links = get_hot([c.site], only_fullnames = True)[0]
if not links:
sr = Subreddit._by_name(g.default_sr)
if sr:
- links = get_hot(sr, only_fullnames = True)
+ links = get_hot([sr], only_fullnames = True)[0]
if links:
+ links = links[:25]
links = Link._by_fullname(links, data=True, return_dict=False)
return links
diff --git a/r2/r2/lib/db/queries.py b/r2/r2/lib/db/queries.py
index c55dc6573..f6d35c527 100644
--- a/r2/r2/lib/db/queries.py
+++ b/r2/r2/lib/db/queries.py
@@ -1,13 +1,16 @@
from r2.models import Account, Link, Comment, Vote, SaveHide
-from r2.models import Message, Inbox, Subreddit
+from r2.models import Message, Inbox, Subreddit, ModeratorInbox
from r2.lib.db.thing import Thing, Merge
from r2.lib.db.operators import asc, desc, timeago
from r2.lib.db import query_queue
from r2.lib.normalized_hot import expire_hot
from r2.lib.db.sorts import epoch_seconds
from r2.lib.utils import fetch_things2, tup, UniqueIterator, set_last_modified
+from r2.lib import utils
from r2.lib.solrsearch import DomainSearchQuery
from r2.lib import amqp, sup
+from r2.lib.comment_tree import add_comment, link_comments
+
import cPickle as pickle
from datetime import datetime
@@ -15,15 +18,15 @@ import itertools
from pylons import g
query_cache = g.permacache
+log = g.log
+make_lock = g.make_lock
precompute_limit = 1000
db_sorts = dict(hot = (desc, '_hot'),
new = (desc, '_date'),
top = (desc, '_score'),
- controversial = (desc, '_controversy'),
- old = (asc, '_date'),
- toplinks = (desc, '_hot'))
+ controversial = (desc, '_controversy'))
def db_sort(sort):
cls, col = db_sorts[sort]
@@ -42,6 +45,29 @@ db_times = dict(all = None,
month = Thing.c._date >= timeago('1 month'),
year = Thing.c._date >= timeago('1 year'))
+# batched_time_sorts/batched_time_times: top and controversial
+# listings with a time-component are really expensive, and for the
+# ones that span more than a day they don't change much (if at all)
+# within that time. So we have some hacks to avoid re-running these
+# queries against the precomputer except up to once per day
+# * To get the results of the queries, we return the results of the
+# (potentially stale) query, merged with the query by 'day' (see
+# get_links)
+# * When we are adding the special queries to the queue, we add them
+# with a preflight check to determine if they are runnable and a
+# postflight action to make them not runnable again for 24 hours
+# (see new_vote)
+# * We have a task called catch_up_batch_queries to be run at least
+# once per day (ideally about once per hour) to find subreddits
+# where these queries haven't been run in the last 24 hours but that
+# have had at least one vote in that time
+# TODO:
+# * Do we need a filter on merged time-queries to keep items that are
+# barely too old from making it into the listing? This probably only
+# matters for 'week'
+batched_time_times = set(('year', 'month', 'week'))
+batched_time_sorts = set(('top', 'controversial'))
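+
+# For example, get_links(sr, 'top', 'week') merges the batched (up to
+# a day stale) 'week' results with the always-fresh 'day' results, so
+# new items can still surface between batch runs (see get_links below).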
+
#we need to define the filter functions here so cachedresults can be pickled
def filter_identity(x):
return x
@@ -51,6 +77,20 @@ def filter_thing2(x):
the object of the relationship."""
return x._thing2
+def make_batched_time_query(sr, sort, time, preflight_check = True):
+ q = get_links(sr, sort, time, merge_batched=False)
+
+ if (g.use_query_cache
+ and sort in batched_time_sorts
+ and time in batched_time_times):
+
+ if not preflight_check:
+ q.force_run = True
+
+ q.batched_time_srid = sr._id
+
+ return q
+
class CachedResults(object):
"""Given a query returns a list-like object that will lazily look up
the query from the persistent cache. """
@@ -63,11 +103,57 @@ class CachedResults(object):
self.data = []
self._fetched = False
+ self.batched_time_srid = None
+
+ @property
+ def sort(self):
+ return self.query._sort
+
+ def preflight_check(self):
+ if getattr(self, 'force_run', False):
+ return True
+
+ sr_id = getattr(self, 'batched_time_srid', None)
+ if not sr_id:
+ return True
+
+ # this is a special query that tries to run less often, see
+ # the discussion about batched_time_times
+ sr = Subreddit._byID(sr_id, data=True)
+
+ if (self.iden in getattr(sr, 'last_batch_query', {})
+ and sr.last_batch_query[self.iden] > utils.timeago('1 day')):
+ # this has been done in the last 24 hours, so we should skip it
+ return False
+
+ return True
+
+ def postflight(self):
+ sr_id = getattr(self, 'batched_time_srid', None)
+ if not sr_id:
+ return True
+
+ with make_lock('modify_sr_last_batch_query(%s)' % sr_id):
+ sr = Subreddit._byID(sr_id, data=True)
+ last_batch_query = getattr(sr, 'last_batch_query', {}).copy()
+ last_batch_query[self.iden] = datetime.now(g.tz)
+ sr.last_batch_query = last_batch_query
+ sr._commit()
+
def fetch(self):
"""Loads the query from the cache."""
- if not self._fetched:
- self._fetched = True
- self.data = query_cache.get(self.iden) or []
+ self.fetch_multi([self])
+
+ @classmethod
+ def fetch_multi(cls, crs):
+ unfetched = [cr for cr in crs if not cr._fetched]
+ if not unfetched:
+ return
+
+ cached = query_cache.get_multi([cr.iden for cr in unfetched])
+ for cr in unfetched:
+ cr.data = cached.get(cr.iden) or []
+ cr._fetched = True
def make_item_tuple(self, item):
"""Given a single 'item' from the result of a query build the tuple
@@ -87,15 +173,21 @@ class CachedResults(object):
def can_insert(self):
"""True if a new item can just be inserted rather than
- rerunning the query. This is only true in some
- circumstances, which includes having no time rules, and
- being sorted descending"""
+ rerunning the query."""
+ # This is only true in some circumstances: queries where
+ # eligibility in the list is determined only by its sort
+ # value (e.g. hot) and where addition/removal from the list
+ # incurs an insertion/deletion event called on the query. So
+ # the top hottest items in X some subreddit where the query
+ # is notified on every submission/banning/unbanning/deleting
+ # will work, but for queries with a time-component or some
+ # other eligibility factor, it cannot be inserted this way.
if self.query._sort in ([desc('_date')],
[desc('_hot'), desc('_date')],
[desc('_score'), desc('_date')],
[desc('_controversy'), desc('_date')]):
- if not any(r.lval.name == '_date'
- for r in self.query._rules):
+ if not any(r for r in self.query._rules
+ if r.lval.name == '_date'):
# if no time-rule is specified, then it's 'all'
return True
return False
@@ -117,9 +209,11 @@ class CachedResults(object):
data = UniqueIterator(data, key = lambda x: x[0])
data = sorted(data, key=lambda x: x[1:], reverse=True)
data = list(data)
+ data = data[:precompute_limit]
+
self.data = data
- query_cache.set(self.iden, self.data[:precompute_limit])
+ query_cache.set(self.iden, self.data)
def delete(self, items):
"""Deletes an item from the cached data."""
@@ -150,33 +244,47 @@ class CachedResults(object):
for x in self.data:
yield x[0]
-def merge_cached_results(*results):
- """Given two CachedResults, merges their lists based on the sorts of
- their queries."""
- if len(results) == 1:
- return list(results[0])
+class MergedCachedResults(object):
+ """Given two CachedResults, merges their lists based on the sorts
+ of their queries."""
+ # normally we'd do this by having a superclass of CachedResults,
+ # but we have legacy pickled CachedResults that we don't want to
+ # break
- #make sure the sorts match
- sort = results[0].query._sort
- assert all(r.query._sort == sort for r in results[1:])
+ def __init__(self, results):
+ self.cached_results = results
+ CachedResults.fetch_multi([r for r in results
+ if isinstance(r, CachedResults)])
+ self._fetched = True
- def thing_cmp(t1, t2):
- for i, s in enumerate(sort):
- #t1 and t2 are tuples of (fullname, *sort_cols), so we can
- #get the value to compare right out of the tuple
+ self.sort = results[0].sort
+ # make sure they're all the same
+ assert all(r.sort == self.sort for r in results[1:])
+
+ # if something is 'top' for the year *and* for today, it would
+ # appear in both listings, so we need to filter duplicates
+ all_items = UniqueIterator((item for cr in results
+ for item in cr.data),
+ key = lambda x: x[0])
+ all_items = sorted(all_items, cmp=self._thing_cmp)
+ self.data = list(all_items)
+
+ def _thing_cmp(self, t1, t2):
+ for i, s in enumerate(self.sort):
+ # t1 and t2 are tuples of (fullname, *sort_cols), so we
+ # can get the value to compare right out of the tuple
v1, v2 = t1[i + 1], t2[i + 1]
if v1 != v2:
return cmp(v1, v2) if isinstance(s, asc) else cmp(v2, v1)
#they're equal
return 0
- all_items = []
- for r in results:
- r.fetch()
- all_items.extend(r.data)
+ def __repr__(self):
+        return '<MergedCachedResults %r>' % (self.cached_results,)
- #all_items = Thing._by_fullname(all_items, return_dict = False)
- return [i[0] for i in sorted(all_items, cmp = thing_cmp)]
+ def __iter__(self):
+ for x in self.data:
+ yield x[0]
def make_results(query, filter = filter_identity):
if g.use_query_cache:
@@ -187,24 +295,37 @@ def make_results(query, filter = filter_identity):
def merge_results(*results):
if g.use_query_cache:
- return merge_cached_results(*results)
+ return MergedCachedResults(results)
else:
m = Merge(results, sort = results[0]._sort)
#assume the prewrap_fn's all match
m.prewrap_fn = results[0].prewrap_fn
return m
-def get_links(sr, sort, time):
+def get_links(sr, sort, time, merge_batched=True):
"""General link query for a subreddit."""
q = Link._query(Link.c.sr_id == sr._id,
sort = db_sort(sort))
- if sort == 'toplinks':
- q._filter(Link.c.top_link == True)
-
if time != 'all':
q._filter(db_times[time])
- return make_results(q)
+
+ res = make_results(q)
+
+ # see the discussion above batched_time_times
+ if (merge_batched
+ and g.use_query_cache
+ and sort in batched_time_sorts
+ and time in batched_time_times):
+
+ byday = Link._query(Link.c.sr_id == sr._id,
+ sort = db_sort(sort))
+ byday._filter(db_times['day'])
+
+ res = merge_results(res,
+ make_results(byday))
+
+ return res
def get_spam_links(sr):
q_l = Link._query(Link.c.sr_id == sr._id,
@@ -297,6 +418,13 @@ def get_hidden(user):
def get_saved(user):
return user_rel_query(SaveHide, user, 'save')
+def get_subreddit_messages(sr):
+ return user_rel_query(ModeratorInbox, sr, 'inbox')
+
+def get_unread_subreddit_messages(sr):
+ return user_rel_query(ModeratorInbox, sr, 'inbox',
+ filters = [ModeratorInbox.c.new == True])
+
inbox_message_rel = Inbox.rel(Account, Message)
def get_inbox_messages(user):
return user_rel_query(inbox_message_rel, user, 'inbox')
@@ -338,15 +466,14 @@ def get_unread_inbox(user):
def add_queries(queries, insert_items = None, delete_items = None):
"""Adds multiple queries to the query queue. If insert_items or
- delete_items is specified, the query may not need to be recomputed at
- all."""
+ delete_items is specified, the query may not need to be
+ recomputed against the database."""
if not g.write_query_queue:
return
- log = g.log
- make_lock = g.make_lock
def _add_queries():
for q in queries:
+ query_cache.reset()
if not isinstance(q, CachedResults):
continue
@@ -393,27 +520,22 @@ def new_link(link):
sr = Subreddit._byID(link.sr_id)
author = Account._byID(link.author_id)
- results = all_queries(get_links, sr, ('hot', 'new', 'old'), ['all'])
+ results = [get_links(sr, 'new', 'all')]
+ # we don't have to do hot/top/controversy because new_vote will do
+ # that
- results.extend(all_queries(get_links, sr, ('top', 'controversial'),
- db_times.keys()))
results.append(get_submitted(author, 'new', 'all'))
- #results.append(get_links(sr, 'toplinks', 'all'))
if link._spam:
results.append(get_spam_links(sr))
-
- if link._deleted:
- results.append(get_links(sr, 'new', 'all'))
- add_queries(results, delete_items = link)
- else:
- # only 'new' qualifies for insertion, which will be done in
- # run_new_links
- add_queries(results, insert_items = link)
- amqp.add_item('new_link', link._fullname)
+ # only 'new' qualifies for insertion, which will be done in
+ # run_new_links
+ add_queries(results, insert_items = link)
+
+ amqp.add_item('new_link', link._fullname)
-def new_comment(comment, inbox_rel):
+def new_comment(comment, inbox_rels):
author = Account._byID(comment.author_id)
job = [get_comments(author, 'new', 'all')]
if comment._deleted:
@@ -425,19 +547,23 @@ def new_comment(comment, inbox_rel):
# job.append(get_spam_comments(sr))
add_queries(job, insert_items = comment)
amqp.add_item('new_comment', comment._fullname)
+ if not g.amqp_host:
+ l = Link._byID(comment.link_id,data=True)
+ add_comment_tree(comment, l)
# note that get_all_comments() is updated by the amqp process
# r2.lib.db.queries.run_new_comments
- if inbox_rel:
- inbox_owner = inbox_rel._thing1
- if inbox_rel._name == "inbox":
- add_queries([get_inbox_comments(inbox_owner)],
- insert_items = inbox_rel)
- else:
- add_queries([get_inbox_selfreply(inbox_owner)],
- insert_items = inbox_rel)
- set_unread(comment, True)
+ if inbox_rels:
+ for inbox_rel in tup(inbox_rels):
+ inbox_owner = inbox_rel._thing1
+ if inbox_rel._name == "inbox":
+ add_queries([get_inbox_comments(inbox_owner)],
+ insert_items = inbox_rel)
+ else:
+ add_queries([get_inbox_selfreply(inbox_owner)],
+ insert_items = inbox_rel)
+ set_unread(comment, inbox_owner, True)
@@ -455,10 +581,21 @@ def new_vote(vote):
if vote.valid_thing and not item._spam and not item._deleted:
sr = item.subreddit_slow
+ # don't do 'new', because that was done by new_link
results = [get_links(sr, 'hot', 'all')]
- results.extend(all_queries(get_links, sr, ('top', 'controversial'), db_times.keys()))
- #results.append(get_links(sr, 'toplinks', 'all'))
+
+ # for top and controversial we do some magic to recompute
+ # these less often; see the discussion above
+ # batched_time_times
+ for sort in batched_time_sorts:
+ for time in db_times.keys():
+ q = make_batched_time_query(sr, sort, time)
+ results.append(q)
+
add_queries(results, insert_items = item)
+
+ sr.last_valid_vote = datetime.now(g.tz)
+ sr._commit()
#must update both because we don't know if it's a changed vote
if vote._name == '1':
@@ -471,27 +608,39 @@ def new_vote(vote):
add_queries([get_liked(user)], delete_items = vote)
add_queries([get_disliked(user)], delete_items = vote)
-def new_message(message, inbox_rel):
+def new_message(message, inbox_rels):
from r2.lib.comment_tree import add_message
from_user = Account._byID(message.author_id)
- to_user = Account._byID(message.to_id)
-
- add_queries([get_sent(from_user)], insert_items = message)
- add_queries([get_inbox_messages(to_user)], insert_items = inbox_rel)
+ for inbox_rel in tup(inbox_rels):
+ to = inbox_rel._thing1
+ # moderator message
+ if isinstance(inbox_rel, ModeratorInbox):
+ add_queries([get_subreddit_messages(to)],
+ insert_items = inbox_rel)
+ # personal message
+ else:
+ add_queries([get_sent(from_user)], insert_items = message)
+ add_queries([get_inbox_messages(to)],
+ insert_items = inbox_rel)
+ set_unread(message, to, True)
add_message(message)
- set_unread(message, True)
-def set_unread(message, unread):
- for i in Inbox.set_unread(message, unread):
- kw = dict(insert_items = i) if unread else dict(delete_items = i)
- if i._name == 'selfreply':
- add_queries([get_unread_selfreply(i._thing1)], **kw)
- elif isinstance(message, Comment):
- add_queries([get_unread_comments(i._thing1)], **kw)
- else:
- add_queries([get_unread_messages(i._thing1)], **kw)
+def set_unread(message, to, unread):
+ if isinstance(to, Subreddit):
+ for i in ModeratorInbox.set_unread(message, unread):
+ kw = dict(insert_items = i) if unread else dict(delete_items = i)
+ add_queries([get_unread_subreddit_messages(i._thing1)], **kw)
+ else:
+ for i in Inbox.set_unread(message, unread):
+ kw = dict(insert_items = i) if unread else dict(delete_items = i)
+ if i._name == 'selfreply':
+ add_queries([get_unread_selfreply(i._thing1)], **kw)
+ elif isinstance(message, Comment):
+ add_queries([get_unread_comments(i._thing1)], **kw)
+ else:
+ add_queries([get_unread_messages(i._thing1)], **kw)
def new_savehide(rel):
user = rel._thing1
@@ -517,8 +666,8 @@ def _by_srid(things):
sr_id, in addition to the looked-up subreddits"""
ret = {}
- for thing in things:
- if hasattr(thing, 'sr_id'):
+ for thing in tup(things):
+ if getattr(thing, 'sr_id', None) is not None:
ret.setdefault(thing.sr_id, []).append(thing)
srs = Subreddit._byID(ret.keys(), return_dict=True) if ret else {}
@@ -526,6 +675,12 @@ def _by_srid(things):
return ret, srs
def ban(things):
+ del_or_ban(things, "ban")
+
+def delete_links(links):
+ del_or_ban(links, "del")
+
+def del_or_ban(things, why):
by_srid, srs = _by_srid(things)
if not by_srid:
return
@@ -536,15 +691,19 @@ def ban(things):
comments = [x for x in things if isinstance(x, Comment)]
if links:
- add_queries([get_spam_links(sr)], insert_items = links)
+ if why == "ban":
+ add_queries([get_spam_links(sr)], insert_items = links)
# rip it out of the listings. bam!
results = [get_links(sr, 'hot', 'all'),
- get_links(sr, 'new', 'all'),
- get_links(sr, 'top', 'all'),
- get_links(sr, 'controversial', 'all')]
- results.extend(all_queries(get_links, sr,
- ('top', 'controversial'),
- db_times.keys()))
+ get_links(sr, 'new', 'all')]
+
+            for sort in batched_time_sorts:
+                for time in db_times.keys():
+                    # this will go through delete_items, so handling
+                    # of batched_time_times isn't necessary and is
+                    # included only for consistency
+                    q = make_batched_time_query(sr, sort, time)
+                    results.append(q)
+
add_queries(results, delete_items = links)
if comments:
@@ -567,12 +726,15 @@ def unban(things):
add_queries([get_spam_links(sr)], delete_items = links)
# put it back in the listings
results = [get_links(sr, 'hot', 'all'),
- get_links(sr, 'new', 'all'),
- get_links(sr, 'top', 'all'),
- get_links(sr, 'controversial', 'all')]
- results.extend(all_queries(get_links, sr,
- ('top', 'controversial'),
- db_times.keys()))
+ get_links(sr, 'new', 'all')]
+ for sort in batched_time_sorts:
+ for time in db_times.keys():
+ # skip the preflight check because we need to redo
+ # this query regardless
+ q = make_batched_time_query(sr, sort, time,
+ preflight_check=False)
+ results.append(q)
+
add_queries(results, insert_items = links)
if comments:
@@ -619,10 +781,9 @@ def add_all_srs():
"""Adds every listing query for every subreddit to the queue."""
q = Subreddit._query(sort = asc('_date'))
for sr in fetch_things2(q):
- add_queries(all_queries(get_links, sr, ('hot', 'new', 'old'), ['all']))
+ add_queries(all_queries(get_links, sr, ('hot', 'new'), ['all']))
add_queries(all_queries(get_links, sr, ('top', 'controversial'), db_times.keys()))
- add_queries([get_links(sr, 'toplinks', 'all'),
- get_spam_links(sr),
+ add_queries([get_spam_links(sr),
#get_spam_comments(sr),
get_reported_links(sr),
#get_reported_comments(sr),
@@ -651,19 +812,53 @@ def add_all_users():
for user in fetch_things2(q):
update_user(user)
+def add_comment_tree(comment, link):
+ #update the comment cache
+ add_comment(comment)
+ #update last modified
+ set_last_modified(link, 'comments')
# amqp queue processing functions
def run_new_comments():
+ """Add new incoming comments to the /comments page"""
+ # this is done as a queue because otherwise the contention for the
+ # lock on the query would be very high
def _run_new_comments(msgs, chan):
fnames = [msg.body for msg in msgs]
- comments = Comment._by_fullname(fnames, return_dict=False)
+ comments = Comment._by_fullname(fnames, data=True, return_dict=False)
+
add_queries([get_all_comments()],
insert_items = comments)
amqp.handle_items('newcomments_q', _run_new_comments, limit=100)
+def run_commentstree():
+ """Add new incoming comments to their respective comments trees"""
+
+ def _run_commentstree(msgs, chan):
+ fnames = [msg.body for msg in msgs]
+ comments = Comment._by_fullname(fnames, data=True, return_dict=False)
+
+ links = Link._byID(set(cm.link_id for cm in comments),
+ data=True,
+ return_dict=True)
+
+ # add the comment to the comments-tree
+ for comment in comments:
+ l = links[comment.link_id]
+ try:
+ add_comment_tree(comment, l)
+ except KeyError:
+ # Hackity hack. Try to recover from a corrupted
+ # comment tree
+ print "Trying to fix broken comments-tree."
+ link_comments(l._id, _update=True)
+ add_comment_tree(comment, l)
+
+ amqp.handle_items('commentstree_q', _run_commentstree, limit=1)
+
#def run_new_links():
# """queue to add new links to the 'new' page. note that this isn't
@@ -798,6 +993,32 @@ def process_votes(drain = False, limit = 100):
amqp.handle_items('register_vote_q', _handle_votes, limit = limit,
drain = drain)
+def catch_up_batch_queries():
+    # catch up on batched_time_times queries that should have been
+    # run but haven't, which should only happen to small
+    # subreddits. This should be cronned to run about once an
+    # hour. The more often it runs, the more the work of rerunning
+    # the actual queries is spread out, but every run has a fixed
+    # cost of looking at every single subreddit
+ sr_q = Subreddit._query(sort=desc('_downs'),
+ data=True)
+ dayago = utils.timeago('1 day')
+ for sr in fetch_things2(sr_q):
+        # if we don't know when the last vote was, it couldn't
+        # have been today
+        if hasattr(sr, 'last_valid_vote') and sr.last_valid_vote > dayago:
+ for sort in batched_time_sorts:
+ for time in batched_time_times:
+ q = make_batched_time_query(sr, sort, time)
+ if q.preflight_check():
+ # we haven't run the batched_time_times in the
+ # last day
+ add_queries([q])
+
+ # make sure that all of the jobs have been completed or processed
+ # by the time we return
+ amqp.worker.join()
+
try:
from r2admin.lib.admin_queries import *
except ImportError:
diff --git a/r2/r2/lib/db/query_queue.py b/r2/r2/lib/db/query_queue.py
index 21406fc0a..85f89bdb8 100644
--- a/r2/r2/lib/db/query_queue.py
+++ b/r2/r2/lib/db/query_queue.py
@@ -7,36 +7,34 @@ from pylons import g
working_prefix = 'working_'
prefix = 'prec_link_'
-TIMEOUT = 600
+TIMEOUT = 600 # after TIMEOUT seconds, assume that the process
+ # calculating a given query has crashed and allow it to
+ # be rerun as appropriate
def add_query(cached_results):
amqp.add_item('prec_links', pickle.dumps(cached_results, -1))
-def _skip_key(iden):
- return 'skip_precompute_queries-%s' % iden
-
def run():
def callback(msgs, chan):
for msg in msgs: # will be len==1
- # r2.lib.db.queries.CachedResults
+ # cr is a r2.lib.db.queries.CachedResults
cr = pickle.loads(msg.body)
iden = cr.query._iden()
- if (iden in g.skip_precompute_queries
- and g.hardcache.get(_skip_key(iden))):
- print 'skipping known query', iden
- continue
-
working_key = working_prefix + iden
key = prefix + iden
last_time = g.memcache.get(key)
# check to see if we've computed this job since it was
# added to the queue
- if last_time and last_time > msg.timestamp:
+ if last_time and last_time > msg.timestamp:
print 'skipping, already computed ', key
return
+ if not cr.preflight_check():
+ print 'skipping, preflight check failed', key
+ return
+
# check if someone else is working on this
elif not g.memcache.add(working_key, 1, TIMEOUT):
print 'skipping, someone else is working', working_key
@@ -48,10 +46,7 @@ def run():
cr.update()
g.memcache.set(key, datetime.now())
- if iden in g.skip_precompute_queries:
- print 'setting to be skipped for 6 hours', iden
- g.hardcache.set(_skip_key(iden), start,
- 60*60*6)
+ cr.postflight()
finally:
g.memcache.delete(working_key)
diff --git a/r2/r2/lib/db/stats.py b/r2/r2/lib/db/stats.py
index 47fdb76de..4e37e9f0a 100644
--- a/r2/r2/lib/db/stats.py
+++ b/r2/r2/lib/db/stats.py
@@ -78,24 +78,3 @@ def default_queries():
queries.append(q)
return queries
-
-def run_queries():
- from r2.models import subreddit
- from pylons import g
- cache = g.cache
- queries = cache.get(cache_key) or default_queries()
-
- for q in queries:
- q._read_cache = False
- q._write_cache = True
- q._cache_time = cache_time
- q._list()
-
- #find top
- q = default_queries()[0]
- q._limit = 1
- top_link = list(q)[0]
- if top_link:
- top_link._load()
- top_link.top_link = True
- top_link._commit()
diff --git a/r2/r2/lib/db/thing.py b/r2/r2/lib/db/thing.py
index adef809fc..9d00e3cc4 100644
--- a/r2/r2/lib/db/thing.py
+++ b/r2/r2/lib/db/thing.py
@@ -32,6 +32,7 @@ import sorts
from .. utils import iters, Results, tup, to36, Storage
from r2.config import cache
from r2.lib.cache import sgm
+from r2.lib.log import log_text
from pylons import g
@@ -75,6 +76,7 @@ class DataThing(object):
_data_int_props = ()
_int_prop_suffix = None
_defaults = {}
+ _essentials = ()
c = operators.Slots()
__safe__ = False
@@ -120,11 +122,53 @@ class DataThing(object):
try:
return getattr(self, '_defaults')[attr]
except KeyError:
+ try:
+ _id = object.__getattribute__(self, "_id")
+ except AttributeError:
+ _id = "???"
+ try:
+ cl = object.__getattribute__(self, "__class__").__name__
+ except AttributeError:
+ cl = "???"
+
if self._loaded:
- raise AttributeError, '%s not found' % attr
+ nl = "it IS loaded."
else:
- raise AttributeError,\
- attr + ' not found. thing is not loaded'
+ nl = "it is NOT loaded."
+
+ # The %d format is nicer, since it has no "L" at the end, but
+ # if we can't do that, fall back on %r.
+ try:
+ id_str = "%d" % _id
+ except TypeError:
+ id_str = "%r" % _id
+
+ desc = '%s(%s).%s' % (cl, id_str, attr)
+
+ try:
+ essentials = object.__getattribute__(self, "_essentials")
+ except AttributeError:
+ print "%s has no _essentials" % desc
+ essentials = ()
+
+ if isinstance(essentials, str):
+ print "Some dumbass forgot a comma."
+ essentials = essentials,
+
+ if attr in essentials:
+ log_text ("essentials-bandaid-reload",
+ "%s not found; %s Forcing reload." % (desc, nl),
+ "warning")
+ self._load()
+
+ try:
+ return self._t[attr]
+ except KeyError:
+ log_text ("essentials-bandaid-failed",
+ "Reload of %s didn't help. I recommend deletion."
+ % desc, "error")
+
+ raise AttributeError, '%s not found; %s' % (desc, nl)
def _cache_key(self):
return thing_prefix(self.__class__.__name__, self._id)
@@ -713,8 +757,12 @@ def Relation(type1, type2, denorm1 = None, denorm2 = None):
res = sgm(cache, pairs, items_db, prefix)
#convert the keys back into objects
- #we can assume the rels will be in the cache and just call
- #_byID lots
+
+ # populate up the local-cache in batch
+ cls._byID(filter(None, res.values()), data=data)
+
+ # now we can assume the rels will be in the cache and just
+ # call _byID lots
res_obj = {}
for k, rid in res.iteritems():
obj_key = (thing1_dict[k[0]], thing2_dict[k[1]], k[2])
diff --git a/r2/r2/lib/db/userrel.py b/r2/r2/lib/db/userrel.py
index ab211b86c..32ef0dce4 100644
--- a/r2/r2/lib/db/userrel.py
+++ b/r2/r2/lib/db/userrel.py
@@ -19,7 +19,7 @@
# All portions of the code written by CondeNet are Copyright (c) 2006-2010
# CondeNet, Inc. All Rights Reserved.
################################################################################
-from r2.lib.memoize import memoize, clear_memo
+from r2.lib.memoize import memoize
def UserRel(name, relation, disable_ids_fn = False, disable_reverse_ids_fn = False):
diff --git a/r2/r2/lib/emailer.py b/r2/r2/lib/emailer.py
index 284173e4e..192f1908f 100644
--- a/r2/r2/lib/emailer.py
+++ b/r2/r2/lib/emailer.py
@@ -44,6 +44,13 @@ def _system_email(email, body, kind, reply_to = "", thing = None):
kind, body = body, reply_to = reply_to,
thing = thing)
+def _nerds_email(body, from_name, kind):
+ """
+ For sending email to the nerds who run this joint
+ """
+ Email.handler.add_to_queue(None, g.nerds_email, from_name, g.nerds_email,
+ kind, body = body)
+
def verify_email(user, dest):
"""
For verifying an email address
@@ -93,6 +100,10 @@ def i18n_email(email, body, name='', reply_to = ''):
return _feedback_email(email, body, Email.Kind.HELP_TRANSLATE, name = name,
reply_to = reply_to)
+def nerds_email(body, from_name=g.domain):
+ """Queues a feedback email to the nerds running this site."""
+ return _nerds_email(body, from_name, Email.Kind.NERDMAIL)
+
def share(link, emails, from_name = "", reply_to = "", body = ""):
"""Queues a 'share link' email."""
now = datetime.datetime.now(g.tz)
@@ -138,11 +149,15 @@ def send_queued_mail(test = False):
should_queue = email.should_queue()
# check only on sharing that the mail is invalid
- if email.kind == Email.Kind.SHARE and should_queue:
- email.body = Share(username = email.from_name(),
- msg_hash = email.msg_hash,
- link = email.thing,
- body = email.body).render(style = "email")
+ if email.kind == Email.Kind.SHARE:
+ if should_queue:
+ email.body = Share(username = email.from_name(),
+ msg_hash = email.msg_hash,
+ link = email.thing,
+                                   body = email.body).render(style = "email")
+ else:
+ email.set_sent(rejected = True)
+ continue
elif email.kind == Email.Kind.OPTOUT:
email.body = Mail_Opt(msg_hash = email.msg_hash,
leave = True).render(style = "email")
diff --git a/r2/r2/lib/filters.py b/r2/r2/lib/filters.py
index 808b681eb..6c32cea54 100644
--- a/r2/r2/lib/filters.py
+++ b/r2/r2/lib/filters.py
@@ -19,13 +19,17 @@
# All portions of the code written by CondeNet are Copyright (c) 2006-2010
# CondeNet, Inc. All Rights Reserved.
################################################################################
-from BeautifulSoup import BeautifulSoup
-
-from pylons import c
-
import cgi
import urllib
import re
+from cStringIO import StringIO
+
+from xml.sax.handler import ContentHandler
+from lxml.sax import saxify
+import lxml.etree
+
+from pylons import g, c
+
from wrapped import Templated, CacheStub
SC_OFF = "<!-- SC_OFF -->"
@@ -122,52 +126,63 @@ def edit_comment_filter(text = ''):
text = unicode(text)
return url_escape(text)
+class SouptestSaxHandler(ContentHandler):
+ def __init__(self, ok_tags):
+ self.ok_tags = ok_tags
+
+ def startElementNS(self, tagname, qname, attrs):
+ if qname not in self.ok_tags:
+ raise ValueError('HAX: Unknown tag: %r' % qname)
+
+ for (ns, name), val in attrs.items():
+ if ns is not None:
+ raise ValueError('HAX: Unknown namespace? Seriously? %r' % ns)
+
+ if name not in self.ok_tags[qname]:
+ raise ValueError('HAX: Unknown attribute-name %r' % name)
+
+ if qname == 'a' and name == 'href':
+ lv = val.lower()
+ if not (lv.startswith('http://')
+ or lv.startswith('https://')
+ or lv.startswith('ftp://')
+ or lv.startswith('mailto:')
+ or lv.startswith('news:')
+ or lv.startswith('/')):
+ raise ValueError('HAX: Unsupported link scheme %r' % val)
+
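+# whitelist for the souptest: maps each allowed tag to the attribute
+# names permitted on it; SouptestSaxHandler raises ValueError on
+# anything outside this set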
+markdown_ok_tags = {
+    'div': ('class',),
+ 'a': set(('href', 'title', 'target', 'nofollow')),
+ 'table': ("align", ),
+ 'th': ("align", ),
+ 'td': ("align", ),
+ }
+markdown_boring_tags = ('p', 'em', 'strong', 'br', 'ol', 'ul', 'hr', 'li',
+ 'pre', 'code', 'blockquote', 'center',
+ 'tbody', 'thead', "tr",
+ 'h1', 'h2', 'h3', 'h4', 'h5', 'h6',)
+for bt in markdown_boring_tags:
+ markdown_ok_tags[bt] = ()
+
def markdown_souptest(text, nofollow=False, target=None, lang=None):
- ok_tags = {
- 'div': ('class'),
- 'a': ('href', 'title', 'target', 'nofollow'),
- }
+ if not text:
+ return text
- boring_tags = ( 'p', 'em', 'strong', 'br', 'ol', 'ul', 'hr', 'li',
- 'pre', 'code', 'blockquote',
- 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', )
+ smd = safemarkdown(text, nofollow, target, lang)
- for bt in boring_tags:
- ok_tags[bt] = ()
-
- smd = safemarkdown (text, nofollow, target, lang)
- soup = BeautifulSoup(smd)
-
- for tag in soup.findAll():
- if not tag.name in ok_tags:
- raise ValueError("<%s> tag found in markdown!" % tag.name)
- ok_attrs = ok_tags[tag.name]
- for k,v in tag.attrs:
- if not k in ok_attrs:
- raise ValueError("<%s %s='%s'> attr found in markdown!"
- % (tag.name, k,v))
- if tag.name == 'a' and k == 'href':
- lv = v.lower()
- if lv.startswith("http:"):
- pass
- elif lv.startswith("https:"):
- pass
- elif lv.startswith("ftp:"):
- pass
- elif lv.startswith("mailto:"):
- pass
- elif lv.startswith("/"):
- pass
- else:
- raise ValueError("Link to '%s' found in markdown!" % v)
+ s = StringIO(smd)
+ tree = lxml.etree.parse(s)
+ handler = SouptestSaxHandler(markdown_ok_tags)
+ saxify(tree, handler)
+ return smd
#TODO markdown should be looked up in batch?
#@memoize('markdown')
def safemarkdown(text, nofollow=False, target=None, lang=None):
from r2.lib.c_markdown import c_markdown
from r2.lib.py_markdown import py_markdown
- from pylons import g
from contrib.markdown import markdown
@@ -181,18 +196,14 @@ def safemarkdown(text, nofollow=False, target=None, lang=None):
target = "_top"
if lang is None:
- # TODO: lang should respect g.markdown_backend
- lang = "py"
+ lang = g.markdown_backend
- try:
- if lang == "c":
- text = c_markdown(text, nofollow, target)
- elif lang == "py":
- text = py_markdown(text, nofollow, target)
- else:
- raise ValueError("weird lang")
- except RuntimeError:
-        text = "Comment Broken"
+ if lang == "c":
+ text = c_markdown(text, nofollow, target)
+ elif lang == "py":
+ text = py_markdown(text, nofollow, target)
+ else:
+ raise ValueError("weird lang [%s]" % lang)
return SC_OFF + MD_START + text + MD_END + SC_ON
@@ -209,8 +220,6 @@ def unkeep_space(text):
def profanity_filter(text):
- from pylons import g
-
def _profane(m):
x = m.group(1)
return ''.join(u"\u2731" for i in xrange(len(x)))
diff --git a/r2/r2/lib/hardcachebackend.py b/r2/r2/lib/hardcachebackend.py
index 7fe70a3ba..4e3804af1 100644
--- a/r2/r2/lib/hardcachebackend.py
+++ b/r2/r2/lib/hardcachebackend.py
@@ -68,6 +68,8 @@ class HardCacheBackend(object):
)
def add(self, category, ids, val, time=0):
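+        # purge any expired leftover row first so a stale
+        # (category, ids) entry can't shadow the new value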
+ self.delete_if_expired(category, ids)
+
expiration = expiration_from_time(time)
value, kind = self.tdb.py2db(val, True)
@@ -87,6 +89,8 @@ class HardCacheBackend(object):
return self.get(category, ids)
def incr(self, category, ids, time=0, delta=1):
+ self.delete_if_expired(category, ids)
+
expiration = expiration_from_time(time)
rp = self.table.update(sa.and_(self.table.c.category==category,
@@ -155,7 +159,8 @@ class HardCacheBackend(object):
def ids_by_category(self, category, limit=1000):
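+        # only return ids whose rows haven't expired yet; expired
+        # rows linger until delete_expired purges them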
s = sa.select([self.table.c.ids],
- self.table.c.category==category,
+ sa.and_(self.table.c.category==category,
+ self.table.c.expiration > datetime.now(g.tz)),
limit = limit)
rows = s.execute().fetchall()
return [ r.ids for r in rows ]
@@ -179,6 +184,13 @@ class HardCacheBackend(object):
rows = s.execute().fetchall()
return [ (r.expiration, r.category, r.ids) for r in rows ]
+ def delete_if_expired(self, category, ids, expiration="now"):
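+        # drop the (category, ids) row only if it has already
+        # expired (assumes clause_from_expiration("now") matches
+        # rows whose expiration time has passed)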
+ expiration_clause = self.clause_from_expiration(expiration)
+ self.table.delete(sa.and_(self.table.c.category==category,
+ self.table.c.ids==ids,
+ expiration_clause)).execute()
+
+
def delete_expired(expiration="now", limit=5000):
hcb = HardCacheBackend(g)
diff --git a/r2/r2/lib/jsonresponse.py b/r2/r2/lib/jsonresponse.py
index 95a500c15..66566ac15 100644
--- a/r2/r2/lib/jsonresponse.py
+++ b/r2/r2/lib/jsonresponse.py
@@ -22,7 +22,7 @@
from r2.lib.utils import tup
from r2.lib.captcha import get_iden
from r2.lib.wrapped import Wrapped, StringTemplate
-from r2.lib.filters import websafe_json
+from r2.lib.filters import websafe_json, spaceCompress
from r2.lib.jsontemplates import get_api_subtype
from r2.lib.base import BaseController
from r2.lib.pages.things import wrap_links
@@ -51,7 +51,7 @@ class JsonResponse(object):
self._errors = set()
self._new_captcha = False
self._data = {}
-
+
def send_failure(self, error):
c.errors.add(error)
self._clear()
@@ -69,7 +69,7 @@ class JsonResponse(object):
res['data'] = self._data
res['errors'] = [(e[0], c.errors[e].message) for e in self._errors]
return {"json": res}
-
+
def set_error(self, error_name, field_name):
self._errors.add((error_name, field_name))
@@ -86,6 +86,9 @@ class JsonResponse(object):
have_error = True
return have_error
+ def process_rendered(self, res):
+ return res
+
def _things(self, things, action, *a, **kw):
"""
function for inserting/replacing things in listings.
@@ -94,7 +97,7 @@ class JsonResponse(object):
if not all(isinstance(t, Wrapped) for t in things):
wrap = kw.pop('wrap', Wrapped)
things = wrap_links(things, wrapper = wrap)
- data = [t.render() for t in things]
+ data = [self.process_rendered(t.render()) for t in things]
if kw:
for d in data:
@@ -114,13 +117,13 @@ class JsonResponse(object):
def _send_data(self, **kw):
self._data.update(kw)
-
+
class JQueryResponse(JsonResponse):
"""
class which mimics the jQuery in javascript for allowing Dom
manipulations on the client side.
-
+
An instantiated JQueryResponse acts just like the "$" function on
the JS layer with the exception of the ability to run arbitrary
code on the client. Selectors and method functions evaluate to
@@ -144,7 +147,13 @@ class JQueryResponse(JsonResponse):
self.objs = None
self.ops = None
JsonResponse._clear(self)
-
+
+ def process_rendered(self, res):
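+        # cached templates now come back uncompressed (spaceCompress
+        # was dropped from ObjectTemplate.update); compress the
+        # rendered content here on its way out instead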
+ if 'data' in res:
+ if 'content' in res['data']:
+ res['data']['content'] = spaceCompress(res['data']['content'])
+ return res
+
def send_failure(self, error):
c.errors.add(error)
self._clear()
@@ -181,12 +190,11 @@ class JQueryResponse(JsonResponse):
selector += ".field-" + field_name
message = c.errors[(error_name, field_name)].message
form.find(selector).show().html(message).end()
-
return {"jquery": self.ops}
# thing methods
#--------------
-
+
def _things(self, things, action, *a, **kw):
data = JsonResponse._things(self, things, action, *a, **kw)
new = self.__getattr__(action)
diff --git a/r2/r2/lib/jsontemplates.py b/r2/r2/lib/jsontemplates.py
index 161d475bd..cb5bf06d8 100644
--- a/r2/r2/lib/jsontemplates.py
+++ b/r2/r2/lib/jsontemplates.py
@@ -46,7 +46,7 @@ def make_fullname(typ, _id):
class ObjectTemplate(StringTemplate):
def __init__(self, d):
self.d = d
-
+
def update(self, kw):
def _update(obj):
if isinstance(obj, (str, unicode)):
@@ -56,10 +56,7 @@ class ObjectTemplate(StringTemplate):
elif isinstance(obj, (list, tuple)):
return map(_update, obj)
elif isinstance(obj, CacheStub) and kw.has_key(obj.name):
- r = kw[obj.name]
- if isinstance(r, (str, unicode)):
- r = spaceCompress(r)
- return r
+ return kw[obj.name]
else:
return obj
res = _update(self.d)
@@ -194,13 +191,23 @@ class AccountJsonTemplate(ThingJsonTemplate):
_data_attrs_ = ThingJsonTemplate.data_attrs(name = "name",
link_karma = "safe_karma",
comment_karma = "comment_karma",
- has_mail = "has_mail")
+ has_mail = "has_mail",
+ has_mod_mail = "has_mod_mail",
+ is_mod = "is_mod",
+ )
def thing_attr(self, thing, attr):
+ from r2.models import Subreddit
if attr == "has_mail":
if c.user_is_loggedin and thing._id == c.user._id:
return bool(c.have_messages)
return None
+ if attr == "has_mod_mail":
+ if c.user_is_loggedin and thing._id == c.user._id:
+ return bool(c.have_mod_messages)
+ return None
+ if attr == "is_mod":
+ return bool(Subreddit.reverse_moderator_ids(thing))
return ThingJsonTemplate.thing_attr(self, thing, attr)
class LinkJsonTemplate(ThingJsonTemplate):
@@ -328,6 +335,7 @@ class MessageJsonTemplate(ThingJsonTemplate):
body_html = "body_html",
author = "author",
dest = "dest",
+ subreddit = "subreddit",
was_comment = "was_comment",
context = "context",
created = "created",
@@ -341,7 +349,14 @@ class MessageJsonTemplate(ThingJsonTemplate):
return ("" if not thing.was_comment
else thing.permalink + "?context=3")
elif attr == "dest":
- return thing.to.name
+ if thing.to_id:
+ return thing.to.name
+ else:
+ return "#" + thing.subreddit.name
+ elif attr == "subreddit":
+ if thing.sr_id:
+ return thing.subreddit.name
+ return None
elif attr == "body_html":
return safemarkdown(thing.body)
return ThingJsonTemplate.thing_attr(self, thing, attr)
diff --git a/r2/r2/lib/lock.py b/r2/r2/lib/lock.py
index fd3ee8f8b..d04d50218 100644
--- a/r2/r2/lib/lock.py
+++ b/r2/r2/lib/lock.py
@@ -40,7 +40,7 @@ class MemcacheLock(object):
self.locks = locks.locks = getattr(locks, 'locks', set())
self.key = key
- self.cache = cache
+ self.cache = cache.get_local_client()
self.time = time
self.timeout = timeout
self.have_lock = False
diff --git a/r2/r2/lib/log.py b/r2/r2/lib/log.py
new file mode 100644
index 000000000..d6087bc76
--- /dev/null
+++ b/r2/r2/lib/log.py
@@ -0,0 +1,68 @@
+# The contents of this file are subject to the Common Public Attribution
+# License Version 1.0. (the "License"); you may not use this file except in
+# compliance with the License. You may obtain a copy of the License at
+# http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+# License Version 1.1, but Sections 14 and 15 have been added to cover use of
+# software over a computer network and provide for limited attribution for the
+# Original Developer. In addition, Exhibit A has been modified to be consistent
+# with Exhibit B.
+#
+# Software distributed under the License is distributed on an "AS IS" basis,
+# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+# the specific language governing rights and limitations under the License.
+#
+# The Original Code is Reddit.
+#
+# The Original Developer is the Initial Developer. The Initial Developer of the
+# Original Code is CondeNet, Inc.
+#
+# All portions of the code written by CondeNet are Copyright (c) 2006-2010
+# CondeNet, Inc. All Rights Reserved.
+################################################################################
+
+from pylons import g
+from r2.lib import amqp
+from datetime import datetime
+import pickle
+import traceback
+
+tz = g.display_tz
+
+Q = 'log_q'
+
+def _default_dict():
+ return dict(time=datetime.now(tz),
+ host=g.reddit_host,
+ port=g.reddit_port,
+ pid=g.reddit_pid)
+
+# e_value and e should actually be the same thing.
+# e_type is just the type of e_value
+# So e and e_traceback are the interesting ones.
+def log_exception(e, e_type, e_value, e_traceback):
+ d = _default_dict()
+
+ d['type'] = 'exception'
+ d['traceback'] = traceback.extract_tb(e_traceback)
+
+ d['exception_type'] = e.__class__.__name__
+ d['exception_desc'] = str(e)
+
+ amqp.add_item(Q, pickle.dumps(d))
+
+def log_text(classification, text=None, level="info"):
+ from r2.lib.filters import _force_utf8
+ if text is None:
+ text = classification
+
+ if level not in ('debug', 'info', 'warning', 'error'):
+ print "What kind of loglevel is %s supposed to be?" % level
+ level = 'error'
+
+ d = _default_dict()
+ d['type'] = 'text'
+ d['level'] = level
+ d['text'] = _force_utf8(text)
+ d['classification'] = classification
+
+ amqp.add_item(Q, pickle.dumps(d))
diff --git a/r2/r2/lib/memoize.py b/r2/r2/lib/memoize.py
index 00d96927d..9267f9ae8 100644
--- a/r2/r2/lib/memoize.py
+++ b/r2/r2/lib/memoize.py
@@ -19,9 +19,15 @@
# All portions of the code written by CondeNet are Copyright (c) 2006-2010
# CondeNet, Inc. All Rights Reserved.
################################################################################
+from hashlib import md5
+
from r2.config import cache
from r2.lib.filters import _force_utf8
-from r2.lib.cache import NoneResult
+from r2.lib.cache import NoneResult, make_key
+from r2.lib.lock import make_lock_factory
+from pylons import g
+
+make_lock = g.make_lock
def memoize(iden, time = 0):
def memoize_fn(fn):
@@ -35,49 +41,38 @@ def memoize(iden, time = 0):
update = kw['_update']
del kw['_update']
- key = _make_key(iden, a, kw)
- #print 'CHECKING', key
+ key = make_key(iden, *a, **kw)
res = None if update else cache.get(key)
if res is None:
- res = fn(*a, **kw)
- if res is None:
- res = NoneResult
- cache.set(key, res, time = time)
+ # not cached, we should calculate it.
+ with make_lock('memoize_lock(%s)' % key):
+ stored = None if update else cache.get(key)
+ if stored is None:
+ # okay now go and actually calculate it
+ res = fn(*a, **kw)
+ if res is None:
+ res = NoneResult
+ cache.set(key, res, time = time)
+ else:
+ # it was calculated while we were waiting on
+ # the lock
+ res = stored
+
if res == NoneResult:
res = None
+
return res
+
return new_fn
return memoize_fn
-def clear_memo(iden, *a, **kw):
- key = _make_key(iden, a, kw)
- #print 'CLEARING', key
- cache.delete(key)
-
-def _make_key(iden, a, kw):
- """
- Make the cache key. We have to descend into *a and **kw to make
- sure that only regular strings are used in the key to keep 'foo'
- and u'foo' in an args list from resulting in differing keys
- """
- def _conv(s):
- if isinstance(s, str):
- return s
- elif isinstance(s, unicode):
- return _force_utf8(s)
- else:
- return str(s)
-
- return (_conv(iden)
- + str([_conv(x) for x in a])
- + str([(_conv(x),_conv(y)) for (x,y) in sorted(kw.iteritems())]))
-
@memoize('test')
def test(x, y):
import time
time.sleep(1)
+ print 'calculating %d + %d' % (x, y)
if x + y == 10:
return None
else:
diff --git a/r2/r2/lib/menus.py b/r2/r2/lib/menus.py
index 6c313c756..483a9a8f3 100644
--- a/r2/r2/lib/menus.py
+++ b/r2/r2/lib/menus.py
@@ -98,12 +98,12 @@ menu = MenuHandler(hot = _('hot'),
mobile = _("mobile"),
store = _("store"),
ad_inq = _("inquire about advertising"),
- toplinks = _("top links"),
random = _('random'),
iphone = _("iPhone app"),
#preferences
options = _('options'),
+ feeds = _("RSS feeds"),
friends = _("friends"),
update = _("password/email"),
delete = _("delete"),
@@ -126,6 +126,7 @@ menu = MenuHandler(hot = _('hot'),
about = _("about"),
edit = _("edit this reddit"),
moderators = _("edit moderators"),
+ modmail = _("moderator mail"),
contributors = _("edit contributors"),
banned = _("ban users"),
banusers = _("ban users"),
@@ -135,7 +136,10 @@ menu = MenuHandler(hot = _('hot'),
mine = _("my reddits"),
i18n = _("help translate"),
+ errors = _("errors"),
awards = _("awards"),
+ ads = _("ads"),
+ usage = _("usage"),
promoted = _("promoted"),
reporters = _("reporters"),
reports = _("reported links"),
@@ -160,7 +164,9 @@ menu = MenuHandler(hot = _('hot'),
live_promos = _('live'),
unpaid_promos = _('unpaid'),
pending_promos = _('pending'),
- rejected_promos = _('rejected')
+ rejected_promos = _('rejected'),
+
+ whitelist = _("whitelist")
)
def menu_style(type):
@@ -287,6 +293,13 @@ class NavButton(Styled):
when it is different from self.title)"""
return self.title
+class ModeratorMailButton(NavButton):
+ def is_selected(self):
+ if c.default_sr and not self.sr_path:
+ return NavButton.is_selected(self)
+ elif not c.default_sr and self.sr_path:
+ return NavButton.is_selected(self)
+
class OffsiteButton(NavButton):
def build(self, base_path = ''):
self.sr_path = False
diff --git a/r2/r2/lib/normalized_hot.py b/r2/r2/lib/normalized_hot.py
index 7160bb897..9a6e4a385 100644
--- a/r2/r2/lib/normalized_hot.py
+++ b/r2/r2/lib/normalized_hot.py
@@ -33,7 +33,6 @@ from datetime import datetime, timedelta
import random
expire_delta = timedelta(minutes = 2)
-TOP_CACHE = 1800
max_items = 150
def access_key(sr):
@@ -77,43 +76,55 @@ def cached_query(query, sr):
return res
-def get_hot(sr, only_fullnames = False):
+def get_hot(srs, only_fullnames = False):
"""Get the (fullname, hotness, epoch_seconds) for the hottest
links in a subreddit. Use the query-cache to avoid some lookups
if we can."""
from r2.lib.db.thing import Query
from r2.lib.db.queries import CachedResults
- q = sr.get_links('hot', 'all')
- if isinstance(q, Query):
- links = cached_query(q, sr)
- res = [(link._fullname, link._hot, epoch_seconds(link._date))
- for link in links]
- elif isinstance(q, CachedResults):
- # we're relying on an implementation detail of CachedResults
- # here, where it's storing tuples that look exactly like the
- # return-type we want, to make our sorting a bit cheaper
- q.fetch()
- res = list(q.data)
+ ret = []
+ queries = [sr.get_links('hot', 'all') for sr in srs]
- age_limit = epoch_seconds(utils.timeago('%d days' % g.HOT_PAGE_AGE))
- return [(fname if only_fullnames else (fname, hot, date))
- for (fname, hot, date) in res
- if date > age_limit]
+ # fetch these all in one go
+ cachedresults = filter(lambda q: isinstance(q, CachedResults), queries)
+ CachedResults.fetch_multi(cachedresults)
+
+    for sr, q in zip(srs, queries):
+ if isinstance(q, Query):
+ links = cached_query(q, sr)
+ res = [(link._fullname, link._hot, epoch_seconds(link._date))
+ for link in links]
+ elif isinstance(q, CachedResults):
+ # we're relying on an implementation detail of
+ # CachedResults here, where it's storing tuples that look
+ # exactly like the return-type we want, to make our
+ # sorting a bit cheaper
+ res = list(q.data)
+
+ # remove any that are too old
+ age_limit = epoch_seconds(utils.timeago('%d days' % g.HOT_PAGE_AGE))
+ res = [(fname if only_fullnames else (fname, hot, date))
+ for (fname, hot, date) in res
+ if date > age_limit]
+ ret.append(res)
+
+ return ret
@memoize('normalize_hot', time = g.page_cache_time)
def normalized_hot_cached(sr_ids):
"""Fetches the hot lists for each subreddit, normalizes the
scores, and interleaves the results."""
results = []
- srs = Subreddit._byID(sr_ids, data = True, return_dict = False)
- for sr in srs:
- # items =:= (fname, hot, epoch_seconds), ordered desc('_hot')
- items = get_hot(sr)[:max_items]
-
+ srs = Subreddit._byID(sr_ids, return_dict = False)
+ hots = get_hot(srs)
+ for items in hots:
if not items:
continue
+ # items =:= (fname, hot, epoch_seconds), ordered desc('_hot')
+ items = items[:max_items]
+
# the hotness of the hottest item in this subreddit
top_score = max(items[0][1], 1)
diff --git a/r2/r2/lib/organic.py b/r2/r2/lib/organic.py
index c2e5f16e1..5fad18996 100644
--- a/r2/r2/lib/organic.py
+++ b/r2/r2/lib/organic.py
@@ -99,7 +99,7 @@ def cached_organic_links(user_id, langs):
#potentially add a up and coming link
if random.choice((True, False)) and sr_ids:
sr = Subreddit._byID(random.choice(sr_ids))
- fnames = get_hot(sr, True)
+ fnames = get_hot([sr], True)[0]
if fnames:
if len(fnames) == 1:
new_item = fnames[0]
diff --git a/r2/r2/lib/pages/admin_pages.py b/r2/r2/lib/pages/admin_pages.py
index 14064da90..640aed6e9 100644
--- a/r2/r2/lib/pages/admin_pages.py
+++ b/r2/r2/lib/pages/admin_pages.py
@@ -41,7 +41,7 @@ class AdminPage(Reddit):
submit_box = False
extension_handling = False
show_sidebar = False
-
+
def __init__(self, nav_menus = None, *a, **kw):
#add admin options to the nav_menus
if c.user_is_admin:
@@ -50,7 +50,10 @@ class AdminPage(Reddit):
if g.translator:
buttons.append(NavButton(menu.i18n, "i18n"))
+            buttons.append(NavButton(menu.ads, "ads"))
buttons.append(NavButton(menu.awards, "awards"))
+ buttons.append(NavButton(menu.errors, "error log"))
+ buttons.append(NavButton(menu.usage, "usage stats"))
admin_menu = NavMenu(buttons, title='show', base_path = '/admin',
type="lightdrop")
diff --git a/r2/r2/lib/pages/pages.py b/r2/r2/lib/pages/pages.py
index f296e0223..82c28805f 100644
--- a/r2/r2/lib/pages/pages.py
+++ b/r2/r2/lib/pages/pages.py
@@ -19,9 +19,9 @@
# All portions of the code written by CondeNet are Copyright (c) 2006-2010
# CondeNet, Inc. All Rights Reserved.
################################################################################
-from r2.lib.wrapped import Wrapped, Templated, NoTemplateFound, CachedTemplate
-from r2.models import Account, Default
-from r2.models import FakeSubreddit, Subreddit
+from r2.lib.wrapped import Wrapped, Templated, CachedTemplate
+from r2.models import Account, Default, make_feedurl
+from r2.models import FakeSubreddit, Subreddit, Ad, AdSR
from r2.models import Friends, All, Sub, NotFound, DomainSR
from r2.models import Link, Printable, Trophy, bidding, PromoteDates
from r2.config import cache
@@ -39,14 +39,15 @@ from r2.lib.contrib.markdown import markdown
from r2.lib.filters import spaceCompress, _force_unicode, _force_utf8
from r2.lib.filters import unsafe, websafe, SC_ON, SC_OFF
from r2.lib.menus import NavButton, NamedButton, NavMenu, PageNameNav, JsButton
-from r2.lib.menus import SubredditButton, SubredditMenu
+from r2.lib.menus import SubredditButton, SubredditMenu, ModeratorMailButton
from r2.lib.menus import OffsiteButton, menu, JsNavMenu
from r2.lib.strings import plurals, rand_strings, strings, Score
from r2.lib.utils import title_to_url, query_string, UrlParser, to_js, vote_hash
-from r2.lib.utils import link_duplicates, make_offset_date, to_csv
+from r2.lib.utils import link_duplicates, make_offset_date, to_csv, median
from r2.lib.template_helpers import add_sr, get_domain
from r2.lib.subreddit_search import popular_searches
from r2.lib.scraper import scrapers
+from r2.lib.log import log_text
import sys, random, datetime, locale, calendar, simplejson, re
import graph, pycountry
@@ -61,6 +62,18 @@ def get_captcha():
if not c.user_is_loggedin or c.user.needs_captcha():
return get_iden()
+def responsive(res, space_compress = False):
+ """
+ Use in places where the template is returned as the result of the
+ controller so that it becomes compatible with the page cache.
+ """
+ if is_api():
+ res = json_respond(res)
+ elif space_compress:
+ res = spaceCompress(res)
+ c.response.content = res
+ return c.response
+
class Reddit(Templated):
'''Base class for rendering a page on reddit. Handles toolbar creation,
content of the footers, and content of the corner buttons.
@@ -129,16 +142,22 @@ class Reddit(Templated):
self._content = PaneStack([ShareLink(), content])
else:
self._content = content
-
+
self.toolbars = self.build_toolbars()
def sr_admin_menu(self):
buttons = [NamedButton('edit', css_class = 'reddit-edit'),
+ NamedButton('modmail', dest = "message/inbox",
+ css_class = 'moderator-mail'),
NamedButton('moderators', css_class = 'reddit-moderators')]
if c.site.type != 'public':
buttons.append(NamedButton('contributors',
css_class = 'reddit-contributors'))
+ elif (c.user_is_loggedin and c.site.use_whitelist and
+ (c.site.is_moderator(c.user) or c.user_is_admin)):
+ buttons.append(NavButton(menu.whitelist, "contributors",
+ css_class = 'reddit-contributors'))
buttons.extend([
NamedButton('traffic', css_class = 'reddit-traffic'),
@@ -177,7 +196,10 @@ class Reddit(Templated):
if total > len(moderators):
more_text = "...and %d more" % (total - len(moderators))
mod_href = "http://%s/about/moderators" % get_domain()
+ helplink = ("/message/compose?to=%%23%s" % c.site.name,
+ "message the moderators")
ps.append(SideContentBox(_('moderators'), moderators,
+ helplink = helplink,
more_href = mod_href,
more_text = more_text))
@@ -212,22 +234,8 @@ class Reddit(Templated):
        In addition, unlike Templated.render, the result is in the form of a pylons
        Response object with its content set.
"""
- try:
- res = Templated.render(self, *a, **kw)
- if is_api():
- res = json_respond(res)
- elif self.space_compress:
- res = spaceCompress(res)
- c.response.content = res
- except NoTemplateFound, e:
- # re-raise the error -- development environment
- if g.debug:
- s = sys.exc_info()
- raise s[1], None, s[2]
- # die gracefully -- production environment
- else:
- abort(404, "not found")
- return c.response
+ res = Templated.render(self, *a, **kw)
+ return responsive(res, self.space_compress)
def corner_buttons(self):
"""set up for buttons in upper right corner of main page."""
@@ -307,8 +315,7 @@ class RedditFooter(CachedTemplate):
('buttons', [[(x.title, x.path) for x in y] for y in self.nav])]
def __init__(self):
- self.nav = [NavMenu([NamedButton("toplinks", False),
- NamedButton("mobile", False, nocname=True),
+ self.nav = [NavMenu([NamedButton("mobile", False, nocname=True),
OffsiteButton("rss", dest = '/.rss'),
NamedButton("store", False, nocname=True),
NamedButton("awards", False, nocname=True),
@@ -478,9 +485,13 @@ class PrefsPage(Reddit):
*a, **kw)
def build_toolbars(self):
- buttons = [NavButton(menu.options, ''),
- NamedButton('friends'),
- NamedButton('update')]
+ buttons = [NavButton(menu.options, '')]
+
+ if c.user.pref_private_feeds:
+ buttons.append(NamedButton('feeds'))
+
+ buttons.extend([NamedButton('friends'),
+ NamedButton('update')])
#if CustomerID.get_id(user):
# buttons += [NamedButton('payment')]
buttons += [NamedButton('delete')]
@@ -492,6 +503,9 @@ class PrefOptions(Templated):
def __init__(self, done = False):
Templated.__init__(self, done = done)
+class PrefFeeds(Templated):
+ pass
+
class PrefUpdate(Templated):
"""Preference form for updating email address and passwords"""
def __init__(self, email = True, password = True, verify = False):
@@ -526,11 +540,21 @@ class MessagePage(Reddit):
self._content))
def build_toolbars(self):
- buttons = [NamedButton('compose'),
+ buttons = [NamedButton('compose', sr_path = False),
NamedButton('inbox', aliases = ["/message/comments",
+ "/message/uread",
"/message/messages",
- "/message/selfreply"]),
- NamedButton('sent')]
+ "/message/selfreply"],
+ sr_path = False),
+ NamedButton('sent', sr_path = False)]
+ if c.show_mod_mail:
+ buttons.append(ModeratorMailButton(menu.modmail, "moderator",
+ sr_path = False))
+ if not c.default_sr:
+ buttons.append(ModeratorMailButton(
+ _("%(site)s mail") % {'site': c.site.name}, "moderator",
+ aliases = ["/about/message/inbox",
+ "/about/message/unread"]))
return [PageNameNav('nomenu', title = _("message")),
NavMenu(buttons, base_path = "/message", type="tabmenu")]
@@ -565,6 +589,8 @@ class HelpPage(BoringPage):
return [PageNameNav('help', title = self.pagename)]
 class FormPage(BoringPage):
     """intended for rendering forms with no rightbox needed or wanted"""
+    create_reddit_box = False
+    submit_box = False
def __init__(self, pagename, show_sidebar = False, *a, **kw):
BoringPage.__init__(self, pagename, show_sidebar = show_sidebar,
@@ -1293,13 +1319,17 @@ class OptIn(Templated):
pass
-class ButtonEmbed(Templated):
+class ButtonEmbed(CachedTemplate):
"""Generates the JS wrapper around the buttons for embedding."""
def __init__(self, button = None, width = 100,
height=100, referer = "", url = "", **kw):
+ arg = "cnameframe=1&" if c.cname else ""
Templated.__init__(self, button = button,
width = width, height = height,
- referer=referer, url = url, **kw)
+ referer=referer, url = url,
+ domain = get_domain(),
+ arg = arg,
+ **kw)
class Button(Wrapped):
cachable = True
@@ -1322,9 +1352,13 @@ class Button(Wrapped):
if not hasattr(w, '_fullname'):
w._fullname = None
+ def render(self, *a, **kw):
+ res = Wrapped.render(self, *a, **kw)
+ return responsive(res, True)
+
class ButtonLite(Button):
- pass
-
+ def render(self, *a, **kw):
+ return Wrapped.render(self, *a, **kw)
class ButtonNoBody(Button):
"""A button page that just returns the raw button for direct embeding"""
@@ -1420,7 +1454,101 @@ class UserAwards(Templated):
else:
raise NotImplementedError
+class AdminErrorLog(Templated):
+ """The admin page for viewing the error log"""
+ def __init__(self):
+ hcb = g.hardcache.backend
+ date_groupings = {}
+ hexkeys_seen = {}
+
+ for ids in hcb.ids_by_category("error"):
+ date, hexkey = ids.split("-")
+
+ hexkeys_seen[hexkey] = True
+
+ d = g.hardcache.get("error-" + ids)
+
+ if d is None:
+ log_text("error=None", "Why is error-%s None?" % ids,
+ "warning")
+ continue
+
+ tpl = (len(d['occurrences']), hexkey, d)
+ date_groupings.setdefault(date, []).append(tpl)
+
+ self.nicknames = {}
+ self.statuses = {}
+
+ for hexkey in hexkeys_seen.keys():
+ nick = g.hardcache.get("error_nickname-%s" % hexkey, "???")
+ self.nicknames[hexkey] = nick
+ status = g.hardcache.get("error_status-%s" % hexkey, "normal")
+ self.statuses[hexkey] = status
+
+ for ids in hcb.ids_by_category("logtext"):
+ date, level, classification = ids.split("-", 2)
+ textoccs = []
+ dicts = g.hardcache.get("logtext-" + ids)
+ if dicts is None:
+ log_text("logtext=None", "Why is logtext-%s None?" % ids,
+ "warning")
+ continue
+ for d in dicts:
+ textoccs.append( (d['text'], d['occ'] ) )
+
+ sort_order = {
+ 'error': -1,
+ 'warning': -2,
+ 'info': -3,
+ 'debug': -4,
+ }[level]
+
+ tpl = (sort_order, level, classification, textoccs)
+ date_groupings.setdefault(date, []).append(tpl)
+
+ self.date_summaries = []
+
+ for date in sorted(date_groupings.keys(), reverse=True):
+ groupings = sorted(date_groupings[date], reverse=True)
+ self.date_summaries.append( (date, groupings) )
+
+ Templated.__init__(self)
+
+class AdminAds(Templated):
+ """The admin page for editing ads"""
+ def __init__(self):
+ from r2.models import Ad
+ Templated.__init__(self)
+ self.ads = Ad._all_ads()
+
+class AdminAdAssign(Templated):
+ """The interface for assigning an ad to a community"""
+ def __init__(self, ad):
+ self.weight = 100
+ Templated.__init__(self, ad = ad)
+
+class AdminAdSRs(Templated):
+ """View the communities an ad is running on"""
+ def __init__(self, ad):
+ self.adsrs = AdSR.by_ad(ad)
+
+ # Create a dictionary of
+ # SR => total weight of all its ads
+ # for all SRs that this ad is running on
+ self.sr_totals = {}
+ for adsr in self.adsrs:
+ sr = adsr._thing2
+
+ if sr.name not in self.sr_totals:
+ # We haven't added up this SR yet.
+ self.sr_totals[sr.name] = 0
+ # Get all its ads and total them up.
+ sr_adsrs = AdSR.by_sr_merged(sr)
+ for adsr2 in sr_adsrs:
+ self.sr_totals[sr.name] += adsr2.weight
+
+ Templated.__init__(self, ad = ad)
class AdminAwards(Templated):
"""The admin page for editing awards"""
@@ -1447,6 +1575,130 @@ class AdminAwardWinners(Templated):
trophies = Trophy.by_award(award)
Templated.__init__(self, award = award, trophies = trophies)
+class AdminUsage(Templated):
+ """The admin page for viewing usage stats"""
+ def __init__(self):
+ hcb = g.hardcache.backend
+
+ self.actions = {}
+ triples = set() # sorting key
+ daily_stats = {}
+
+ for ids in hcb.ids_by_category("profile_count", limit=10000):
+ time, action = ids.split("-")
+
+ if time.endswith("xx:xx"):
+ factor = 1.0
+ label = time[5:10] # MM/DD
+ day = True
+ elif time.endswith(":xx"):
+ factor = 24.0
+ label = time[11:] # HH:xx
+ else:
+ factor = 288.0 # number of five-minute periods in a day
+ label = time[11:] # HH:MM
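+            # "factor" is the number of periods per day; it scales a
+            # single period's count up to a daily-equivalent rate so
+            # it can be compared against the daily medians below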
+
+            # Elapsed in hardcache is in hundredths of a second.
+            # Divide it by 100 so from this point forward, we're
+            # dealing with seconds -- as floats with two decimal
+            # places of precision. Similarly, round the average
+            # to two decimal places.
+ count = g.hardcache.get("profile_count-" + ids)
+ if count is None or count == 0:
+ log_text("usage count=None", "For %r, it's %r" % (ids, count), "error")
+ continue
+ elapsed = g.hardcache.get("profile_elapsed-" + ids, 0) / 100.0
+ average = int(100.0 * elapsed / count) / 100.0
+
+ triples.add( (factor, time, label) )
+
+ if factor == 1.0:
+ daily_stats.setdefault(action, []).append(
+ (count, elapsed, average)
+ )
+
+ self.actions.setdefault(action, {})
+ self.actions[action][label] = dict(count=count, elapsed=elapsed,
+ average=average,
+ factor=factor,
+ classes = {})
+
+ # Figure out what a typical day looks like. For each action,
+ # look at the daily stats and record the median.
+ for action in daily_stats.keys():
+ med = {}
+ med["count"] = median([ x[0] for x in daily_stats[action] ])
+ med["elapsed"] = median([ x[1] for x in daily_stats[action] ])
+ med["average"] = median([ x[2] for x in daily_stats[action] ])
+
+ for d in self.actions[action].values():
+ ice_cold = False
+ for category in ("elapsed", "count", "average"):
+ if category == "average":
+ scaled = d[category]
+ else:
+ scaled = d[category] * d["factor"]
+
+ if category == "elapsed" and scaled < 5 * 60:
+ # If we're spending less than five mins a day
+ # on this operation, consider it ice cold regardless
+ # of how much of an outlier it is
+ ice_cold = True
+
+ if ice_cold:
+ d["classes"][category] = "load0"
+ continue
+
+ if med[category] <= 0:
+ # This shouldn't happen. If it does,
+ # toggle commenting of the next three lines.
+ raise ValueError("Huh. I guess this can happen.")
+# d["classes"][category] = "load9"
+# continue
+
+ ratio = scaled / med[category]
+ if ratio > 5.0:
+ d["classes"][category] = "load9"
+ elif ratio > 3.0:
+ d["classes"][category] = "load8"
+ elif ratio > 2.0:
+ d["classes"][category] = "load7"
+ elif ratio > 1.5:
+ d["classes"][category] = "load6"
+ elif ratio > 1.1:
+ d["classes"][category] = "load5"
+ elif ratio > 0.9:
+ d["classes"][category] = "load4"
+ elif ratio > 0.75:
+ d["classes"][category] = "load3"
+ elif ratio > 0.5:
+ d["classes"][category] = "load2"
+ elif ratio > 0.10:
+ d["classes"][category] = "load1"
+ else:
+ d["classes"][category] = "load0"
+
+ # Build a list called labels that gives the template a sorting
+ # order for the columns.
+ self.labels = []
+ # Keep track of how many times we've seen a granularity (i.e., factor)
+ # so we can hide any that come after the third
+ factor_counts = {}
+ # sort actions by whatever will end up as the first column
+ action_sorting_column = None
+ for factor, time, label in sorted(triples, reverse=True):
+ if action_sorting_column is None:
+ action_sorting_column = label
+ factor_counts.setdefault(factor, 0)
+ factor_counts[factor] += 1
+ self.labels.append( (label, factor_counts[factor] > 3) )
+
+ self.action_order = sorted(self.actions.keys(), reverse=True,
+ key = lambda x:
+ self.actions[x].get(action_sorting_column, {"elapsed":0})["elapsed"])
+
+ Templated.__init__(self)
+
class Embed(Templated):
"""wrapper for embedding /help into reddit as if it were not on a separate wiki."""
@@ -1578,6 +1830,8 @@ class ContributorList(UserList):
@property
def form_title(self):
+ if c.site.type == "public":
+ return _("add to whitelist")
return _('add contributor')
@property
@@ -1826,7 +2080,9 @@ class UserText(CachedTemplate):
class MediaEmbedBody(CachedTemplate):
"""What's rendered inside the iframe that contains media objects"""
- pass
+ def render(self, *a, **kw):
+ res = CachedTemplate.render(self, *a, **kw)
+ return responsive(res, True)
class Traffic(Templated):
@staticmethod
@@ -2058,6 +2314,28 @@ class RedditTraffic(Traffic):
"%5.2f%%" % f))
return res
+class RedditAds(Templated):
+ def __init__(self, **kw):
+ self.sr_name = c.site.name
+ self.adsrs = AdSR.by_sr_merged(c.site)
+ self.total = 0
+
+ self.adsrs.sort(key=lambda a: a._thing1.codename)
+
+ seen = {}
+ for adsr in self.adsrs:
+ seen[adsr._thing1.codename] = True
+ self.total += adsr.weight
+
+ self.other_ads = []
+ all_ads = Ad._all_ads()
+ all_ads.sort(key=lambda a: a.codename)
+ for ad in all_ads:
+ if ad.codename not in seen:
+ self.other_ads.append(ad)
+
+ Templated.__init__(self, **kw)
+
class PaymentForm(Templated):
def __init__(self, **kw):
self.countries = pycountry.countries
@@ -2147,7 +2425,6 @@ class Promote_Graph(Templated):
(total_sale, total_refund)),
multiy = False)
- # table is labeled as "last month"
history = self.now - datetime.timedelta(30)
self.top_promoters = bidding.PromoteDates.top_promoters(history)
else:
@@ -2239,9 +2516,79 @@ class RawString(Templated):
def render(self, *a, **kw):
return unsafe(self.s)
-class Dart_Ad(Templated):
+class Dart_Ad(CachedTemplate):
def __init__(self, tag = None):
tag = tag or "homepage"
tracker_url = AdframeInfo.gen_url(fullname = "dart_" + tag,
ip = request.ip)
Templated.__init__(self, tag = tag, tracker_url = tracker_url)
+
+ def render(self, *a, **kw):
+ res = CachedTemplate.render(self, *a, **kw)
+ return responsive(res, False)
+
+class HouseAd(CachedTemplate):
+ def __init__(self, imgurl=None, linkurl=None, submit_link=None):
+ Templated.__init__(self, imgurl = imgurl, linkurl = linkurl,
+ submit_link = submit_link)
+
+ def render(self, *a, **kw):
+ res = CachedTemplate.render(self, *a, **kw)
+ return responsive(res, False)
+
+class ComScore(CachedTemplate):
+ pass
+
+def render_ad(reddit_name=None, codename=None):
+ if not reddit_name:
+ reddit_name = g.default_sr
+
+ if codename:
+ if codename == "DART":
+ return Dart_Ad(reddit_name).render()
+ else:
+ try:
+ ad = Ad._by_codename(codename)
+ except NotFound:
+ abort(404)
+ attrs = ad.important_attrs()
+ return HouseAd(**attrs).render()
+
+ try:
+ sr = Subreddit._by_name(reddit_name)
+ except NotFound:
+ return Dart_Ad(g.default_sr).render()
+
+ ads = {}
+
+ for adsr in AdSR.by_sr_merged(sr):
+ ad = adsr._thing1
+ ads[ad.codename] = (ad, adsr.weight)
+
+ total_weight = sum(t[1] for t in ads.values())
+
+ if total_weight == 0:
+ log_text("no ads", "No ads found for %s" % reddit_name, "error")
+ abort(404)
+
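+    # weighted lottery: draw a point in [0, total_weight) and walk
+    # the cumulative weights; the ad whose interval covers the draw
+    # wins with probability weight/total_weight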
+    lotto = random.randint(0, total_weight - 1)
+    winner = None
+    for t in ads.values():
+        lotto -= t[1]
+        if lotto <= 0:
+            winner = t[0]
+            break
+
+    if winner:
+        if winner.codename == "DART":
+            return Dart_Ad(reddit_name).render()
+        else:
+            attrs = winner.important_attrs()
+            return HouseAd(**attrs).render()
+
+ # No winner?
+
+ log_text("no winner",
+ "No winner found for /r/%s, total_weight=%d" %
+ (reddit_name, total_weight),
+ "error")
+
+ return Dart_Ad(reddit_name).render()
diff --git a/r2/r2/lib/queues.py b/r2/r2/lib/queues.py
index 1075316d0..0f637cf83 100644
--- a/r2/r2/lib/queues.py
+++ b/r2/r2/lib/queues.py
@@ -68,9 +68,12 @@ class RedditQueueMap(QueueMap):
self._q('scraper_q')
self._q('searchchanges_q', self_refer=True)
self._q('newcomments_q')
+ self._q('commentstree_q')
# this isn't in use until the spam_q plumbing is
#self._q('newpage_q')
self._q('register_vote_q', self_refer=True)
+ self._q('log_q', self_refer=True)
+ self._q('usage_q', self_refer=True)
def bindings(self):
self.newlink_bindings()
@@ -87,6 +90,7 @@ class RedditQueueMap(QueueMap):
def newcomment_bindings(self):
self._bind('new_comment', 'newcomments_q')
+ self._bind('new_comment', 'commentstree_q')
def newsubreddit_bindings(self):
self._bind('new_subreddit', 'searchchanges_q')
diff --git a/r2/r2/lib/solrsearch.py b/r2/r2/lib/solrsearch.py
index b50cf2c62..b10ed3449 100644
--- a/r2/r2/lib/solrsearch.py
+++ b/r2/r2/lib/solrsearch.py
@@ -44,6 +44,8 @@ from r2.lib.utils import unicode_safe, tup
from r2.lib.cache import SelfEmptyingCache
from r2.lib import amqp
+solr_cache_time = g.solr_cache_time
+
## Changes to the list of searchable languages will require changes to
## Solr's configuration (specifically, the fields that are searched)
searchable_langs = set(['dk','nl','en','fi','fr','de','it','no','nn','pt',
@@ -485,7 +487,7 @@ class SearchQuery(object):
return "<%s(%s)>" % (self.__class__.__name__, ", ".join(attrs))
- def run(self, after = None, num = 100, reverse = False):
+ def run(self, after = None, num = 1000, reverse = False):
if not self.q:
return pysolr.Results([],0)
@@ -568,71 +570,24 @@ class SearchQuery(object):
if reverse:
sort = swap_strings(sort,'asc','desc')
+ after = after._fullname if after else None
- if after:
- # size of the pre-search to run in the case that we need
- # to search more than once. A bigger one can reduce the
- # number of searches that need to be run twice, but if
- # it's bigger than the default display size, it could
- # waste some
- PRESEARCH_SIZE = num
-
- # run a search and get back the number of hits, so that we
- # can re-run the search with that max_count.
- pre_search = cls.run_search_cached(q, sort, 0, PRESEARCH_SIZE,
- solr_params)
-
- if (PRESEARCH_SIZE >= pre_search.hits
- or pre_search.hits == len(pre_search.docs)):
- # don't run a second search if our pre-search found
- # all of the elements anyway
- search = pre_search
- else:
- # now that we know how many to request, we can request
- # the whole lot
- search = cls.run_search_cached(q, sort, 0,
- pre_search.hits,
- solr_params, max=True)
-
- search.docs = get_after(search.docs, after._fullname, num)
- else:
- search = cls.run_search_cached(q, sort, 0, num, solr_params)
+ search = cls.run_search_cached(q, sort, 0, num, solr_params)
+ search.docs = get_after(search.docs, after, num)
return search
@staticmethod
- def run_search_cached(q, sort, start, rows, other_params, max=False):
- "Run the search, first trying the best available cache"
+ @memoize('solr_search', solr_cache_time)
+ def run_search_cached(q, sort, start, rows, other_params):
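+        # memoized on (q, sort, start, rows, other_params); run()
+        # now fetches one large window and pages through it with
+        # get_after instead of caching each slice separately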
+ with SolrConnection() as s:
+ g.log.debug(("Searching q = %r; sort = %r,"
+ + " start = %r, rows = %r,"
+ + " params = %r")
+ % (q,sort,start,rows,other_params))
- # first, try to see if we've cached the result for the entire
- # dataset for that query, returning the requested slice of it
- # if so. If that's not available, try the cache for the
- # partial result requested (passing the actual search along to
- # solr if both of those fail)
- full_key = 'solrsearch_%s' % ','.join(('%r' % r)
- for r in (q,sort,other_params))
- part_key = "%s,%d,%d" % (full_key, start, rows)
-
- full_cached = g.cache.get(full_key)
- if full_cached:
- res = pysolr.Results(hits = full_cached.hits,
- docs = full_cached.docs[start:start+rows])
- else:
- part_cached = g.cache.get(part_key)
- if part_cached:
- res = part_cached
- else:
- with SolrConnection() as s:
- g.log.debug(("Searching q = %r; sort = %r,"
- + " start = %r, rows = %r,"
- + " params = %r, max = %r")
- % (q,sort,start,rows,other_params,max))
-
- res = s.search(q, sort, start = start, rows = rows,
- other_params = other_params)
-
- g.cache.set(full_key if max else part_key,
- res, time = g.solr_cache_time)
+ res = s.search(q, sort, start = start, rows = rows,
+ other_params = other_params)
# extract out the fullname in the 'docs' field, since that's
# all we care about
@@ -708,11 +663,14 @@ class DomainSearchQuery(SearchQuery):
qt='standard')
def get_after(fullnames, fullname, num):
+ if not fullname:
+ return fullnames[:num]
+
for i, item in enumerate(fullnames):
if item == fullname:
return fullnames[i+1:i+num+1]
- else:
- return fullnames[:num]
+
+ return fullnames[:num]
def run_commit(optimize=False):
diff --git a/r2/r2/lib/strings.py b/r2/r2/lib/strings.py
index f3d11907d..8820a5a04 100644
--- a/r2/r2/lib/strings.py
+++ b/r2/r2/lib/strings.py
@@ -80,7 +80,7 @@ string_dict = dict(
moderator = _("you have been added as a moderator to [%(title)s](%(url)s)."),
contributor = _("you have been added as a contributor to [%(title)s](%(url)s)."),
banned = _("you have been banned from posting to [%(title)s](%(url)s)."),
- traffic = _('you have been added to the list of users able to see [traffic for the sponsoted link "%(title)s"](%(traffic_url)s).')
+ traffic = _('you have been added to the list of users able to see [traffic for the sponsored link "%(title)s"](%(traffic_url)s).')
),
subj_add_friend = dict(
@@ -117,12 +117,7 @@ string_dict = dict(
permalink_title = _("%(author)s comments on %(title)s"),
link_info_title = _("%(title)s : %(site)s"),
banned_subreddit = _("""**this reddit has been banned**\n\nmost likely this was done automatically by our spam filtering program. the program is still learning, and may even have some bugs, so if you feel the ban was a mistake, please send a message to [our site admins](%(link)s) and be sure to include the **exact name of the reddit**."""),
- comments_panel_text = _("""
- The following is a sample of what Reddit users had to say about this
- page. The full discussion is available [here](%(fd_link)s); you can
- also get there by clicking the link's title
- (in the middle of the toolbar, to the right of the comments button).
- """),
+ comments_panel_text = _("""The following is a sample of what Reddit users had to say about this page. The full discussion is available [here](%(fd_link)s); you can also get there by clicking the link's title (in the middle of the toolbar, to the right of the comments button)."""),
submit_link = _("""You are submitting a link. The key to a successful submission is interesting content and a descriptive title."""),
 submit_text = _("""You are submitting a text-based post. Speak your mind. A title is required, but expanding further in the text field is not. Beginning your title with "vote up if" is a violation of intergalactic law."""),
@@ -130,7 +125,7 @@ string_dict = dict(
verify_email = _("we're going to need to verify your email address for you to proceed."),
 email_verified = _("your email address has been verified"),
email_verify_failed = _("Verification failed. Please try that again"),
- search_failed = _("Our search machines are under too much load to handle your request right now. :( Sorry for the inconvenience.\n\n[Try again](%(link)s) in a little bit -- but please don't mash reload; that only makes the problem worse.")
+ search_failed = _("Our search machines are under too much load to handle your request right now. :( Sorry for the inconvenience. [Try again](%(link)s) in a little bit -- but please don't mash reload; that only makes the problem worse.")
)
class StringHandler(object):
diff --git a/r2/r2/lib/traffic.py b/r2/r2/lib/traffic.py
index 816a213ac..6c65d78bf 100644
--- a/r2/r2/lib/traffic.py
+++ b/r2/r2/lib/traffic.py
@@ -25,7 +25,7 @@ from cPickle import loads
from utils import query_string
import os, socket, time, datetime
from pylons import g
-from r2.lib.memoize import memoize, clear_memo
+from r2.lib.memoize import memoize
def load_traffic_uncached(interval, what, iden,
start_time = None, stop_time = None,
diff --git a/r2/r2/lib/translation.py b/r2/r2/lib/translation.py
index 8a20356db..194a5a5ca 100644
--- a/r2/r2/lib/translation.py
+++ b/r2/r2/lib/translation.py
@@ -263,8 +263,8 @@ class TranslatedString(Templated):
if indx < 0:
return all(self.is_valid(i) for i in range(0,len(self.msgstr)))
elif indx < len(self.msgstr):
- return self.msgid.compatible(self.msgstr[indx]) or \
- self.msgstr.compatible(self.msgstr[indx])
+ return self.msgid.compatible(self.msgstr[indx]) #or \
+ #self.msgstr.compatible(self.msgstr[indx])
return True
else:
return self.msgid.compatible(self.msgstr)
@@ -655,7 +655,7 @@ class Transliterator(AutoTranslator):
def __init__(self, **kw):
Translator.__init__(self, **kw)
for string in self.strings:
- if string.is_translated() \
+ if not string.is_translated() \
and not isinstance(string, GettextHeader):
if string.plural:
string.add(self.translate(string.msgstr[0].unicode()),
@@ -767,12 +767,80 @@ class TamilTranslator(Transliterator):
t = t.replace(k, v)
return t
+class SerbianCyrillicTranslator(Transliterator):
+ letters = \
+ (( "A" , u'\u0410'),
+ ( "B" , u'\u0411'),
+ ( "V" , u'\u0412'),
+ ( "G" , u'\u0413'),
+ ( "D" , u'\u0414'),
+ ( u'\u0110' , u'\u0402'),
+ ( "E" , u'\u0415'),
+ ( u"\u017d" , u'\u0416'),
+ ( "Z" , u'\u0417'),
+ ( "I" , u'\u0418'),
+ ( "J" , u'\u0408'),
+ ( "K" , u'\u041a'),
+ ( "L" , u'\u041b'),
+ ( "Lj" , u'\u0409'),
+ ( "M" , u'\u041c'),
+ ( "N" , u'\u041d'),
+ ( "Nj" , u'\u040a'),
+ ( "O" , u'\u041e'),
+ ( "P" , u'\u041f'),
+ ( "R" , u'\u0420'),
+ ( "S" , u'\u0421'),
+ ( "T" , u'\u0422'),
+ ( u"\u0106" , u'\u040b'),
+ ( "U" , u'\u0423'),
+ ( "F" , u'\u0424'),
+ ( 'H' , u'\u0425'),
+ ( "C" , u'\u0426'),
+ ( u"\u010c", u'\u0427'),
+ ( u"D\u017e", u'\u040f'),
+ ( u"\u0160", u'\u0428'),
-
-
+ ( "a" , u'\u0430'),
+ ( "b" , u'\u0431'),
+ ( "v" , u'\u0432'),
+ ( "g" , u'\u0433'),
+ ( "d" , u'\u0434'),
+ ( u'\u0111' , u'\u0452'),
+ ( "e" , u'\u0435'),
+ ( u"\u017e" , u'\u0436'),
+ ( "z" , u'\u0437'),
+ ( "i" , u'\u0438'),
+ ( "j" , u'\u0458'),
+ ( "k" , u'\u043a'),
+ ( "l" , u'\u043b'),
+ ( "lj" , u'\u0459'),
+ ( "m" , u'\u043c'),
+ ( "n" , u'\u043d'),
+ ( "nj" , u'\u045a'),
+ ( "o" , u'\u043e'),
+ ( "p" , u'\u043f'),
+ ( "r" , u'\u0440'),
+ ( "s" , u'\u0441'),
+ ( "t" , u'\u0442'),
+ ( u"\u0107" , u'\u045b'),
+ ( "u" , u'\u0443'),
+ ( "f" , u'\u0444'),
+ ( 'h' , u'\u0445'),
+ ( "c" , u'\u0446'),
+ ( u"\u010d", u'\u0447'),
+ ( u"d\u017e", u'\u045f'),
+ ( u"\u0161", u'\u0448'))
+ ligatures = [(x,y) for x, y in letters if len(x) == 2]
+ letters = dict((x, y) for x, y in letters if len(x) == 1)
+ def trans_rules(self, string):
+ for x, y in self.ligatures:
+ string = string.replace(x, y)
+ return "".join(self.letters.get(s, s) for s in string)
+
import random
class LeetTranslator(AutoTranslator):
def trans_rules(self, string):
key = dict(a=["4","@"],
b=["8"], c=["("],
d=[")", "|)"], e=["3"],
@@ -786,9 +854,11 @@ class LeetTranslator(AutoTranslator):
return ''.join(s)
def get_translator(locale):
+ #if locale == 'sr':
+ # return SerbianCyrillicTranslator(locale = locale)
if locale == 'leet':
return LeetTranslator(locale = locale)
- elif locale == 'en':
+ elif locale.startswith('en'):
return USEnglishTranslator(locale = locale)
elif locale == 'ta':
return TamilTranslator(locale = locale)
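SerbianCyrillicTranslator splits its table into two-character ligatures and single letters because trans_rules must substitute digraphs like "Lj" before the character-by-character pass, or each digraph would be transliterated as two separate letters. A condensed sketch of the same two-phase scheme (table trimmed to a few pairs):

    # -*- coding: utf-8 -*-
    pairs = [("Lj", u'\u0409'), ("lj", u'\u0459'), ("L", u'\u041b'),
             ("u", u'\u0443'), ("b", u'\u0431'),
             ("a", u'\u0430'), ("n", u'\u043d')]
    ligatures = [(x, y) for x, y in pairs if len(x) == 2]
    letters = dict((x, y) for x, y in pairs if len(x) == 1)

    def to_cyrillic(s):
        for x, y in ligatures:   # digraphs first, or "Lj" becomes two letters
            s = s.replace(x, y)
        return "".join(letters.get(ch, ch) for ch in s)

    print to_cyrillic(u"Ljubljana")   # u'\u0409\u0443\u0431\u0459\u0430\u043d\u0430'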
diff --git a/r2/r2/lib/utils/utils.py b/r2/r2/lib/utils/utils.py
index 92f8c22f9..e13aec807 100644
--- a/r2/r2/lib/utils/utils.py
+++ b/r2/r2/lib/utils/utils.py
@@ -19,7 +19,8 @@
# All portions of the code written by CondeNet are Copyright (c) 2006-2010
# CondeNet, Inc. All Rights Reserved.
################################################################################
-from urllib import unquote_plus, urlopen
+from urllib import unquote_plus
+from urllib2 import urlopen
from urlparse import urlparse, urlunparse
from threading import local
import signal
@@ -27,6 +28,8 @@ from copy import deepcopy
import cPickle as pickle
import re, math, random
+from BeautifulSoup import BeautifulSoup
+
from datetime import datetime, timedelta
from pylons.i18n import ungettext, _
from r2.lib.filters import _force_unicode
@@ -54,6 +57,9 @@ def randstr(len, reallyrandom = False):
return ''.join(random.choice(alphabet)
for i in range(len))
+def is_authorized_cname(domain, cnames):
+ return any((domain == cname or domain.endswith('.' + cname))
+ for cname in cnames)
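is_authorized_cname accepts the whitelisted hostname itself or any true subdomain of it, while rejecting look-alike suffixes that don't fall on a label boundary (the cname below is illustrative):

    cnames = ['reddit.example.com']

    is_authorized_cname('reddit.example.com', cnames)       # True  -- exact match
    is_authorized_cname('pics.reddit.example.com', cnames)  # True  -- subdomain
    is_authorized_cname('evilreddit.example.com', cnames)   # False -- no '.' boundary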
class Storage(dict):
"""
@@ -292,26 +298,31 @@ def path_component(s):
res = r_path_component.findall(base_url(s))
return (res and res[0]) or s
-r_title = re.compile('<title>(.*?)<\/title>', re.I|re.S)
-r_charset = re.compile("""<meta.*charset\s*=\s*["']?([\w-]+)["']?""", re.I|re.S)
-r_encoding = re.compile("""<\?xml.*encoding=["']?([\w-]+)["']?""", re.I|re.S)
def get_title(url):
"""Fetches the contents of url and extracts (and utf-8 encodes)
- the contents of <title>"""
- import chardet
- if not url or not url.startswith('http://'): return None
+ the contents of <title>"""
+ if not url or not url.startswith('http://'):
+ return None
+
try:
- content = urlopen(url).read()
- t = r_title.findall(content)
- if t:
- title = t[0].strip()
- en = (r_charset.findall(content) or
- r_encoding.findall(content))
- encoding = en[0] if en else chardet.detect(content)["encoding"]
- if encoding:
- title = unicode(title, encoding).encode("utf-8")
- return title
- except: return None
+ # if we don't find it in the first kb of the resource, we
+ # probably won't find it
+ opener = urlopen(url, timeout=15)
+ text = opener.read(1024)
+ opener.close()
+ bs = BeautifulSoup(text)
+ if not bs:
+ return
+
+ title_bs = bs.first('title')
+
+ if not title_bs or title_bs.children:
+ return
+
+ return title_bs.text.encode('utf-8')
+
+ except:
+ return None
valid_schemes = ('http', 'https', 'ftp', 'mailto')
valid_dns = re.compile('^[-a-zA-Z0-9]+$')
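The rewritten get_title reads only the first kilobyte of the resource before handing it to BeautifulSoup, on the theory that a <title> not found there won't be found at all; urllib2, swapped in for urllib above, is what makes the timeout possible. The fetch half of that, isolated as a sketch:

    from urllib2 import urlopen

    def read_head(url, nbytes = 1024, timeout = 15):
        """Fetch at most the first nbytes of a resource."""
        opener = urlopen(url, timeout = timeout)
        try:
            return opener.read(nbytes)
        finally:
            opener.close()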
@@ -348,6 +359,9 @@ def sanitize_url(url, require_scheme = False):
#if this succeeds, this portion of the dns is almost
#valid and converted to ascii
label = label.encode('idna')
+ except TypeError:
+ print "label sucks: [%r]" % label
+ raise
except UnicodeError:
return
else:
@@ -456,6 +470,10 @@ def to_base(q, alphabet):
def to36(q):
return to_base(q, '0123456789abcdefghijklmnopqrstuvwxyz')
+def median(l):
+ if l:
+ return sorted(l)[len(l)/2]
+
def query_string(dict):
pairs = []
for k,v in dict.iteritems():
@@ -628,8 +646,9 @@ class UrlParser(object):
g.domain, or a subdomain of the provided subreddit's cname.
"""
from pylons import g
- return (not self.hostname or
+ return (not self.hostname or
self.hostname.endswith(g.domain) or
+ is_authorized_cname(self.hostname, g.authorized_cnames) or
(subreddit and subreddit.domain and
self.hostname.endswith(subreddit.domain)))
@@ -1147,3 +1166,20 @@ def in_chunks(it, size=25):
except StopIteration:
if chunk:
yield chunk
+
+class Hell(Exception):
+ def __str__(self):
+ return "boom!"
+
+class Bomb(object):
+ @classmethod
+ def __getattr__(cls, key):
+ raise Hell()
+
+ @classmethod
+ def __setattr__(cls, key, val):
+ raise Hell()
+
+ @classmethod
+ def __repr__(cls):
+ raise Hell()
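Hell and Bomb are debugging tripwires: planted in place of a real object, a Bomb makes any later attribute read, write, or repr raise immediately, flagging code that touches something it shouldn't. An illustrative use (comment stands in for any thing object):

    comment.author = Bomb()   # this object should never be used again
    comment.author.name       # any later access now raises Hell: "boom!"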
diff --git a/r2/r2/lib/wrapped.py b/r2/r2/lib/wrapped.py
index e574aba0f..a5560302b 100644
--- a/r2/r2/lib/wrapped.py
+++ b/r2/r2/lib/wrapped.py
@@ -23,7 +23,7 @@ from itertools import chain
from datetime import datetime
import re, types
-class NoTemplateFound(Exception): pass
+from hashlib import md5
class StringTemplate(object):
"""
@@ -54,7 +54,7 @@ class StringTemplate(object):
self.template = unicode(template)
except UnicodeDecodeError:
self.template = unicode(template, "utf8")
-
+
def update(self, d):
"""
Given a dictionary of replacement rules for the Template,
@@ -134,20 +134,37 @@ class Templated(object):
if not hasattr(self, "render_class"):
self.render_class = self.__class__
+ def _notfound(self, style):
+ from pylons import g, request
+ from pylons.controllers.util import abort
+ from r2.lib.log import log_text
+ if g.debug:
+ raise NotImplementedError(repr(self), style)
+ else:
+ if style == 'png':
+ level = "debug"
+ else:
+ level = "warning"
+ log_text("missing template",
+ "Couldn't find %s template for %r %s" %
+ (style, self, request.path),
+ level)
+ abort(404)
+
def template(self, style = 'html'):
"""
Fetches template from the template manager
"""
from r2.config.templates import tpm
from pylons import g
+
debug = g.template_debug
template = None
try:
template = tpm.get(self.render_class,
style, cache = not debug)
except AttributeError:
- raise NoTemplateFound, (repr(self), style)
-
+ self._notfound(style)
return template
def cache_key(self, *a):
@@ -165,6 +182,7 @@ class Templated(object):
"""
from filters import unsafe
from pylons import c
+
# the style has to default to the global render style
# fetch template
template = self.template(style)
@@ -183,7 +201,7 @@ class Templated(object):
c.render_style = render_style
return res
else:
- raise NoTemplateFound, repr(self)
+ self._notfound(style)
def _render(self, attr, style, **kwargs):
"""
@@ -249,7 +267,7 @@ class Templated(object):
# in the tuple that is the current dict's values.
# This dict cast will generate a new dict of cache_key
# to value
- cached = g.rendercache.get_multi(dict(current.values()))
+ cached = self._read_cache(dict(current.values()))
# replacements will be a map of key -> rendered content
# for updating the current set of updates
replacements = {}
@@ -290,10 +308,10 @@ class Templated(object):
# that we didn't find in the cache.
# cache content that was newly rendered
- g.rendercache.set_multi(dict((k, v)
- for k, (v, kw) in updates.values()
- if k in to_cache))
-
+ self._write_cache(dict((k, v)
+ for k, (v, kw) in updates.values()
+ if k in to_cache))
+
# edge case: this may be the primary template and cacheable
if isinstance(res, CacheStub):
res = updates[res.name][1][0]
@@ -321,8 +339,25 @@ class Templated(object):
res = res.finalize(kwargs)
return res
-
-
+
+ def _write_cache(self, keys):
+ from pylons import g
+
+ toset = dict((md5(key).hexdigest(), val)
+ for (key, val)
+ in keys.iteritems())
+ g.rendercache.set_multi(toset)
+
+ def _read_cache(self, keys):
+ from pylons import g
+
+ ekeys = dict((md5(key).hexdigest(), key)
+ for key in keys)
+ found = g.rendercache.get_multi(ekeys)
+ return dict((ekeys[fkey], val)
+ for (fkey, val)
+ in found.iteritems())
+
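_write_cache and _read_cache interpose an md5 hexdigest between render-cache keys and memcached: rendered-template keys can be arbitrarily long and contain spaces, but memcached keys must be short (250 bytes in the text protocol) and whitespace-free, and a 32-character digest always qualifies. For instance:

    from hashlib import md5

    key = "Link(('sr', u'pics'), ('style', 'html'), ...)"  # illustrative render key
    print md5(key).hexdigest()   # 32 hex chars, always memcached-safe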
def render(self, style = None, **kw):
from r2.lib.filters import unsafe
res = self._render(None, style, **kw)
@@ -380,7 +415,7 @@ class CachedTemplate(Templated):
# can make the caching process-local.
template_hash = getattr(self.template(style), "hash",
id(self.__class__))
-
+
# these values are needed to render any link on the site, and
# a menu is just a set of links, so we best cache against
# them.
@@ -453,9 +488,9 @@ class Wrapped(CachedTemplate):
break
except AttributeError:
pass
-
+
if not found:
- raise AttributeError, attr
+ raise AttributeError, "%r has no %s" % (self, attr)
setattr(self, attr, res)
return res
diff --git a/r2/r2/models/__init__.py b/r2/r2/models/__init__.py
index fdf8db877..2301271f1 100644
--- a/r2/r2/models/__init__.py
+++ b/r2/r2/models/__init__.py
@@ -27,6 +27,7 @@ from vote import *
from report import *
from subreddit import *
from award import *
+from ad import *
from bidding import *
from mail_queue import Email, has_opted_out, opt_count
from admintools import *
diff --git a/r2/r2/models/account.py b/r2/r2/models/account.py
index d11b626bf..91b73619d 100644
--- a/r2/r2/models/account.py
+++ b/r2/r2/models/account.py
@@ -23,7 +23,7 @@ from r2.lib.db.thing import Thing, Relation, NotFound
from r2.lib.db.operators import lower
from r2.lib.db.userrel import UserRel
from r2.lib.memoize import memoize
-from r2.lib.utils import modhash, valid_hash, randstr, timefromnow
+from r2.lib.utils import modhash, valid_hash, randstr, timefromnow, UrlParser
from r2.lib.cache import sgm
from pylons import g
@@ -61,6 +61,7 @@ class Account(Thing):
pref_mark_messages_read = True,
pref_threaded_messages = True,
pref_collapse_read_messages = False,
+ pref_private_feeds = True,
reported = 0,
report_made = 0,
report_correct = 0,
@@ -301,7 +302,6 @@ class Account(Thing):
else:
g.hardcache.set("cup_info-%d" % self._id, cup_info, cache_lifetime)
-
def remove_cup(self):
g.hardcache.delete("cup_info-%d" % self._id)
@@ -313,6 +313,7 @@ class Account(Thing):
ids = [ int(i) for i in ids ]
return sgm(g.hardcache, ids, miss_fn=None, prefix="cup_info-")
+
class FakeAccount(Account):
_nodb = True
pref_no_profanity = True
@@ -338,6 +339,29 @@ def valid_cookie(cookie):
return (account, True)
return (False, False)
+def valid_feed(name, feedhash, path):
+ if name and feedhash and path:
+ from r2.lib.template_helpers import add_sr
+ path = add_sr(path)
+ try:
+ user = Account._by_name(name)
+ if (user.pref_private_feeds and
+ feedhash == make_feedhash(user, path)):
+ return user
+ except NotFound:
+ pass
+
+def make_feedhash(user, path):
+ return sha.new("".join([user.name, user.password, g.FEEDSECRET])
+ ).hexdigest()
+
+def make_feedurl(user, path, ext = "rss"):
+ u = UrlParser(path)
+ u.update_query(user = user.name,
+ feed = make_feedhash(user, path))
+ u.set_extension(ext)
+ return u.unparse()
+
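Private feeds are authorized by signing the user's credentials into the feed URL: make_feedurl embeds user and feed query parameters, and valid_feed recomputes the hash on each request and compares. A condensed sketch of the round-trip (FEEDSECRET stands in for g.FEEDSECRET; note that, as written above, the hash covers the user's name and password but not the path):

    import sha   # the module the model code uses

    FEEDSECRET = 'some-site-secret'

    def feedhash(name, password):
        return sha.new("".join([name, password, FEEDSECRET])).hexdigest()

    issued = feedhash('alice', 'hunter2')           # baked into the feed URL
    assert feedhash('alice', 'hunter2') == issued   # same user: verifies
    assert feedhash('mallory', 'pw') != issued      # anyone else: rejected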
def valid_login(name, password):
try:
a = Account._by_name(name)
@@ -358,7 +382,7 @@ def valid_password(a, password):
salt = a.password[:3]
if a.password == passhash(a.name, password, salt):
return a
- except AttributeError:
+ except (AttributeError, UnicodeEncodeError):
return False
def passhash(username, password, salt = ''):
@@ -398,8 +422,18 @@ class DeletedUser(FakeAccount):
def name(self):
return '[deleted]'
+ @property
+ def _deleted(self):
+ return True
+
def _fullname(self):
raise NotImplementedError
def _id(self):
raise NotImplementedError
+
+ def __setattr__(self, attr, val):
+ if attr == '_deleted':
+ pass
+ else:
+ object.__setattr__(self, attr, val)
diff --git a/r2/r2/models/ad.py b/r2/r2/models/ad.py
new file mode 100644
index 000000000..181d953e6
--- /dev/null
+++ b/r2/r2/models/ad.py
@@ -0,0 +1,142 @@
+# The contents of this file are subject to the Common Public Attribution
+# License Version 1.0. (the "License"); you may not use this file except in
+# compliance with the License. You may obtain a copy of the License at
+# http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+# License Version 1.1, but Sections 14 and 15 have been added to cover use of
+# software over a computer network and provide for limited attribution for the
+# Original Developer. In addition, Exhibit A has been modified to be consistent
+# with Exhibit B.
+#
+# Software distributed under the License is distributed on an "AS IS" basis,
+# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+# the specific language governing rights and limitations under the License.
+#
+# The Original Code is Reddit.
+#
+# The Original Developer is the Initial Developer. The Initial Developer of the
+# Original Code is CondeNet, Inc.
+#
+# All portions of the code written by CondeNet are Copyright (c) 2006-2008
+# CondeNet, Inc. All Rights Reserved.
+################################################################################
+from r2.lib.db.thing import Thing, Relation, NotFound
+from r2.lib.db.operators import asc, desc, lower
+from r2.lib.memoize import memoize
+from r2.models import Subreddit
+from pylons import c, g, request
+
+class Ad (Thing):
+ _defaults = dict(
+ codename = None,
+ imgurl = None,
+ linkurl = None,
+ )
+
+ @classmethod
+ @memoize('ad.all_ads')
+ def _all_ads_cache(cls):
+ return [ a._id for a in Ad._query(sort=desc('_date'), limit=1000) ]
+
+ @classmethod
+ def _all_ads(cls, _update=False):
+ all = cls._all_ads_cache(_update=_update)
+ # Can't just return Ad._byID() results because
+ # the ordering will be lost
+ d = Ad._byID(all, data=True)
+ return [ d[id] for id in all ]
+
+ @classmethod
+ def _new(cls, codename, imgurl, linkurl):
+ print "Creating new ad codename=%s imgurl=%s linkurl=%s" % (
+ codename, imgurl, linkurl)
+ a = Ad(codename=codename, imgurl=imgurl, linkurl=linkurl)
+ a._commit()
+ Ad._all_ads_cache(_update=True)
+
+ @classmethod
+ def _by_codename(cls, codename):
+ q = cls._query(lower(Ad.c.codename) == codename.lower())
+ q._limit = 1
+ ad = list(q)
+
+ if ad:
+ return cls._byID(ad[0]._id, True)
+ else:
+ raise NotFound, 'Ad %s' % codename
+
+ def url(self):
+ return "%s/ads/%s" % (g.ad_domain, self.codename)
+
+ def submit_link(self):
+ from r2.lib.template_helpers import get_domain
+ from mako.filters import url_escape
+
+ d = get_domain(subreddit=False)
+ u = self.url()
+
+ return "http://%s/r/ads/submit?url=%s" % (d, url_escape(u))
+
+ def important_attrs(self):
+ return dict(imgurl=self.imgurl, linkurl=self.linkurl, submit_link=self.submit_link())
+
+class AdSR(Relation(Ad, Subreddit)):
+ @classmethod
+ def _new(cls, ad, sr, weight=100):
+ t = AdSR(ad, sr, "adsr")
+ t.weight = weight
+ t._commit()
+
+ AdSR.by_ad(ad, _update=True)
+ AdSR.by_sr(sr, _update=True)
+
+ @classmethod
+ @memoize('adsr.by_ad')
+ def by_ad_cache(cls, ad):
+ q = AdSR._query(AdSR.c._thing1_id == ad._id,
+ sort = desc('_date'))
+ q._limit = 500
+ return [ t._id for t in q ]
+
+ @classmethod
+ def by_ad(cls, ad, _update=False):
+ rel_ids = cls.by_ad_cache(ad, _update=_update)
+ adsrs = AdSR._byID_rel(rel_ids, data=True, eager_load=True,
+ thing_data=True, return_dict = False)
+ return adsrs
+
+ @classmethod
+ @memoize('adsr.by_sr')
+ def by_sr_cache(cls, sr):
+ q = AdSR._query(AdSR.c._thing2_id == sr._id,
+ sort = desc('_date'))
+ q._limit = 500
+ return [ t._id for t in q ]
+
+ @classmethod
+ def by_sr(cls, sr, _update=False):
+ rel_ids = cls.by_sr_cache(sr, _update=_update)
+ adsrs = AdSR._byID_rel(rel_ids, data=True, eager_load=True,
+ thing_data=True, return_dict = False)
+ return adsrs
+
+ @classmethod
+ def by_sr_merged(cls, sr, _update=False):
+ if sr.name == g.default_sr:
+ return cls.by_sr(sr)
+
+ my_adsrs = cls.by_sr(sr)
+ global_adsrs = cls.by_sr(Subreddit._by_name(g.default_sr))
+
+ seen = {}
+ for adsr in my_adsrs:
+ seen[adsr._thing1.codename] = True
+ for adsr in global_adsrs:
+ if adsr._thing1.codename not in seen:
+ my_adsrs.append(adsr)
+
+ return my_adsrs
+
+ @classmethod
+ def by_ad_and_sr(cls, ad, sr):
+ q = cls._fast_query(ad, sr, "adsr")
+ return q.values()[0]
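Ad._all_ads_cache and AdSR's by_*_cache methods share one pattern: memoize only the list of ids (cheap to store, easy to refresh with _update=True), then re-fetch the full things with _byID and restore the order, since _byID returns an unordered dict. Distilled (Widget is an illustrative stand-in):

    @memoize('widget.all_ids')
    def all_widget_ids():
        return [w._id for w in Widget._query(sort = desc('_date'), limit = 1000)]

    def all_widgets(_update = False):
        ids = all_widget_ids(_update = _update)
        by_id = Widget._byID(ids, data = True)   # dict: id -> thing, order lost
        return [by_id[i] for i in ids]           # reimpose the cached ordering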
diff --git a/r2/r2/models/admintools.py b/r2/r2/models/admintools.py
index 32c98ab3a..a5880c09a 100644
--- a/r2/r2/models/admintools.py
+++ b/r2/r2/models/admintools.py
@@ -21,6 +21,7 @@
################################################################################
from r2.lib.utils import tup
from r2.lib.filters import websafe
+from r2.lib.log import log_text
from r2.models import Report, Account
from pylons import g
@@ -30,7 +31,8 @@ from copy import copy
class AdminTools(object):
- def spam(self, things, auto, moderator_banned, banner, date = None, **kw):
+ def spam(self, things, auto=True, moderator_banned=False,
+ banner=None, date = None, **kw):
from r2.lib.db import queries
things = [x for x in tup(things) if not x._spam]
@@ -98,7 +100,7 @@ class AdminTools(object):
def set_last_sr_ban(self, things):
by_srid = {}
for thing in things:
- if hasattr(thing, 'sr_id'):
+ if getattr(thing, 'sr_id', None) is not None:
by_srid.setdefault(thing.sr_id, []).append(thing)
if by_srid:
@@ -117,7 +119,7 @@ def is_banned_IP(ip):
return False
def is_banned_domain(dom):
- return False
+ return None
def valid_thing(v, karma):
return not v._thing1._spam
@@ -125,6 +127,10 @@ def valid_thing(v, karma):
def valid_user(v, sr, karma):
return True
+# Returns whether this person is being suspicious
+def login_throttle(username, wrong_password):
+ return False
+
def apply_updates(user):
pass
diff --git a/r2/r2/models/award.py b/r2/r2/models/award.py
index 4447fe31c..8b61938ed 100644
--- a/r2/r2/models/award.py
+++ b/r2/r2/models/award.py
@@ -21,11 +21,10 @@
################################################################################
from r2.lib.db.thing import Thing, Relation, NotFound
from r2.lib.db.userrel import UserRel
-from r2.lib.db.operators import desc, lower
+from r2.lib.db.operators import asc, desc, lower
from r2.lib.memoize import memoize
from r2.models import Account
from pylons import c, g, request
-from r2.lib.db.operators import asc
class Award (Thing):
_defaults = dict(
diff --git a/r2/r2/models/builder.py b/r2/r2/models/builder.py
index 4b701b768..82d847666 100644
--- a/r2/r2/models/builder.py
+++ b/r2/r2/models/builder.py
@@ -34,7 +34,7 @@ from r2.lib.wrapped import Wrapped
from r2.lib import utils
from r2.lib.db import operators
from r2.lib.cache import sgm
-from r2.lib.comment_tree import link_comments, user_messages, conversation, tree_sort_fn
+from r2.lib.comment_tree import *
from copy import deepcopy, copy
import time
@@ -90,18 +90,12 @@ class Builder(object):
wrapped = []
count = 0
- if isinstance(c.site, FakeSubreddit):
- mods = []
- else:
- mods = c.site.moderators
- modlink = ''
- if c.cname:
- modlink = '/about/moderators'
- else:
- modlink = '/r/%s/about/moderators' % c.site.name
-
- modlabel = (_('moderator of /r/%(reddit)s, speaking officially') %
- dict(reddit = c.site.name) )
+ modlink = {}
+ modlabel = {}
+ for s in subreddits.values():
+ modlink[s._id] = '/r/%s/about/moderators' % s.name
+ modlabel[s._id] = (_('moderator of /r/%(reddit)s, speaking officially') %
+ dict(reddit = s.name) )
for item in items:
@@ -142,9 +136,9 @@ class Builder(object):
w.author and w.author.name in g.admins):
add_attr(w.attribs, 'A')
- if (w.distinguished == 'moderator' and
- getattr(item, "author_id", None) in mods):
- add_attr(w.attribs, 'M', label=modlabel, link=modlink)
+ if w.distinguished == 'moderator':
+ add_attr(w.attribs, 'M', label=modlabel[item.sr_id],
+ link=modlink[item.sr_id])
if w.author and w.author._id in cup_infos and not c.profilepage:
cup_info = cup_infos[w.author._id]
@@ -154,7 +148,7 @@ class Builder(object):
label=label,
link = "/user/%s" % w.author.name)
- if hasattr(item, "sr_id"):
+ if hasattr(item, "sr_id") and item.sr_id is not None:
w.subreddit = subreddits[item.sr_id]
w.likes = likes.get((user, item))
@@ -204,6 +198,8 @@ class Builder(object):
w.moderator_banned = ban_info.get('moderator_banned', False)
w.autobanned = ban_info.get('auto', False)
w.banner = ban_info.get('banner')
+ if hasattr(w, "author") and w.author._spam:
+ w.show_spam = "author"
elif getattr(item, 'reported', 0) > 0:
w.show_reports = True
@@ -240,7 +236,7 @@ class QueryBuilder(Builder):
self.start_count = kw.get('count', 0) or 0
self.after = kw.get('after')
self.reverse = kw.get('reverse')
-
+
self.prewrap_fn = None
if hasattr(query, 'prewrap_fn'):
self.prewrap_fn = query.prewrap_fn
@@ -372,18 +368,30 @@ class QueryBuilder(Builder):
class IDBuilder(QueryBuilder):
def init_query(self):
- names = self.names = list(tup(self.query))
+ names = list(tup(self.query))
- if self.reverse:
+ after = self.after._fullname if self.after else None
+
+ self.names = self._get_after(names,
+ after,
+ self.reverse)
+
+ @staticmethod
+ def _get_after(l, after, reverse):
+ names = list(l)
+
+ if reverse:
names.reverse()
- if self.after:
+ if after:
try:
- i = names.index(self.after._fullname)
+ i = names.index(after)
except ValueError:
- self.names = ()
+ names = ()
else:
- self.names = names[i + 1:]
+ names = names[i + 1:]
+
+ return names
def fetch_more(self, last_item, num_have):
done = False
@@ -405,14 +413,22 @@ class IDBuilder(QueryBuilder):
return done, new_items
-class SearchBuilder(QueryBuilder):
+class SearchBuilder(IDBuilder):
def init_query(self):
self.skip = True
- self.total_num = 0
- self.start_time = time.time()
self.start_time = time.time()
+ search = self.query.run()
+ names = list(search.docs)
+ self.total_num = search.hits
+
+ after = self.after._fullname if self.after else None
+
+ self.names = self._get_after(names,
+ after,
+ self.reverse)
+
def keep_item(self,item):
# doesn't use the default keep_item because we want to keep
# things that were voted on, even if they've chosen to hide
@@ -422,31 +438,6 @@ class SearchBuilder(QueryBuilder):
else:
return True
-
- def fetch_more(self, last_item, num_have):
- from r2.lib import solrsearch
-
- done = False
- limit = None
- if self.num:
- num_need = self.num - num_have
- if num_need <= 0:
- return True, None
- else:
- limit = max(int(num_need * EXTRA_FACTOR), 1)
- else:
- done = True
-
- search = self.query.run(after = last_item or self.after,
- reverse = self.reverse,
- num = limit)
-
- new_items = Thing._by_fullname(search.docs, data = True, return_dict=False)
-
- self.total_num = search.hits
-
- return done, new_items
-
def empty_listing(*things):
parent_name = None
for t in things:
@@ -484,9 +475,21 @@ class CommentBuilder(Builder):
for j in self.item_iter(i.child.things):
yield j
- def get_items(self, num, starting_depth = 0):
+ def get_items(self, num):
r = link_comments(self.link._id)
cids, comment_tree, depth, num_children = r
+
+ if (not isinstance(self.comment, utils.iters)
+ and self.comment and not self.comment._id in depth):
+ g.log.error("self.comment (%d) not in depth. Forcing update..."
+ % self.comment._id)
+
+ r = link_comments(self.link._id, _update=True)
+ cids, comment_tree, depth, num_children = r
+
+ if not self.comment._id in depth:
+ g.log.error("Update didn't help. This is gonna end in tears.")
+
if cids:
comments = set(Comment._byID(cids, data = True,
return_dict = False))
@@ -503,7 +506,11 @@ class CommentBuilder(Builder):
extra = {}
top = None
dont_collapse = []
+ ignored_parent_ids = []
#loading a portion of the tree
+
+ start_depth = 0
+
if isinstance(self.comment, utils.iters):
candidates = []
candidates.extend(self.comment)
@@ -514,6 +521,10 @@ class CommentBuilder(Builder):
#if hasattr(candidates[0], "parent_id"):
# parent = comment_dict[candidates[0].parent_id]
# items.append(parent)
+ if (hasattr(candidates[0], "parent_id") and
+ candidates[0].parent_id is not None):
+ ignored_parent_ids.append(candidates[0].parent_id)
+ start_depth = depth[candidates[0].parent_id]
#if permalink
elif self.comment:
top = self.comment
@@ -549,7 +560,7 @@ class CommentBuilder(Builder):
comments.remove(to_add)
if to_add._deleted and not comment_tree.has_key(to_add._id):
pass
- elif depth[to_add._id] < self.max_depth:
+ elif depth[to_add._id] < self.max_depth + start_depth:
#add children
if comment_tree.has_key(to_add._id):
candidates.extend(comment_tree[to_add._id])
@@ -589,6 +600,11 @@ class CommentBuilder(Builder):
#put the extras in the tree
for p_id, morelink in extra.iteritems():
+ if p_id not in cids:
+ if p_id in ignored_parent_ids:
+ raise KeyError("%r not in cids because it was ignored" % p_id)
+ else:
+ raise KeyError("%r not in cids but it wasn't ignored" % p_id)
parent = cids[p_id]
parent.child = empty_listing(morelink)
parent.child.parent_name = parent._fullname
@@ -641,9 +657,9 @@ class CommentBuilder(Builder):
return final
class MessageBuilder(Builder):
- def __init__(self, user, parent = None, focal = None,
+ def __init__(self, parent = None, focal = None,
skip = True, **kw):
- self.user = user
+
self.num = kw.pop('num', None)
self.focal = focal
self.parent = parent
@@ -661,11 +677,11 @@ class MessageBuilder(Builder):
for j in i.child.things:
yield j
+ def get_tree(self):
+ raise NotImplementedError, "get_tree"
+
def get_items(self):
- if self.parent:
- tree = conversation(self.user, self.parent)
- else:
- tree = user_messages(self.user)
+ tree = self.get_tree()
prev = next = None
if not self.parent:
@@ -747,6 +763,37 @@ class MessageBuilder(Builder):
return (final, prev, next, len(final), len(final))
+class SrMessageBuilder(MessageBuilder):
+ def __init__(self, sr, **kw):
+ self.sr = sr
+ MessageBuilder.__init__(self, **kw)
+
+ def get_tree(self):
+ if self.parent:
+ return sr_conversation(self.sr, self.parent)
+ return subreddit_messages(self.sr)
+
+class UserMessageBuilder(MessageBuilder):
+ def __init__(self, user, **kw):
+ self.user = user
+ MessageBuilder.__init__(self, **kw)
+
+ def get_tree(self):
+ if self.parent:
+ return conversation(self.user, self.parent)
+ return user_messages(self.user)
+
+class ModeratorMessageBuilder(MessageBuilder):
+ def __init__(self, user, **kw):
+ self.user = user
+ MessageBuilder.__init__(self, **kw)
+
+ def get_tree(self):
+ if self.parent:
+ return conversation(self.user, self.parent)
+ return moderator_messages(self.user)
+
+
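With the tree query factored into get_tree, the MessageBuilder subclasses differ only in which comment_tree lookup they run; a caller just picks the builder that matches the inbox being viewed (an illustrative dispatch, not actual controller code):

    def builder_for(user, where, sr = None, parent = None):
        if sr is not None:
            return SrMessageBuilder(sr, parent = parent)
        elif where == 'moderator':
            return ModeratorMessageBuilder(user, parent = parent)
        else:
            return UserMessageBuilder(user, parent = parent)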
def make_wrapper(parent_wrapper = Wrapped, **params):
def wrapper_fn(thing):
w = parent_wrapper(thing)
@@ -765,5 +812,5 @@ class TopCommentBuilder(CommentBuilder):
max_depth = 1, wrap = wrap)
def get_items(self, num = 10):
- final = CommentBuilder.get_items(self, num = num, starting_depth = 0)
+ final = CommentBuilder.get_items(self, num = num)
return [ cm for cm in final if not cm.deleted ]
diff --git a/r2/r2/models/link.py b/r2/r2/models/link.py
index ff94b8457..35283aa5e 100644
--- a/r2/r2/models/link.py
+++ b/r2/r2/models/link.py
@@ -53,7 +53,7 @@ class Link(Thing, Printable):
disable_comments = False,
selftext = '',
ip = '0.0.0.0')
-
+ _essentials = ('sr_id',)
_nsfw = re.compile(r"\bnsfw\b", re.I)
def __init__(self, *a, **kw):
@@ -72,7 +72,7 @@ class Link(Thing, Printable):
from subreddit import Default
if sr == Default:
sr = None
-
+
url = cls.by_url_key(url)
link_ids = g.permacache.get(url)
if link_ids:
@@ -131,7 +131,7 @@ class Link(Thing, Printable):
l._commit()
l.set_url_cache()
if author._spam:
- admintools.spam(l, True, False, 'banned user')
+ admintools.spam(l, banner='banned user')
return l
@classmethod
@@ -186,15 +186,15 @@ class Link(Thing, Printable):
if self._spam and (not user or
(user and self.author_id != user._id)):
return False
-
+
#author_karma = wrapped.author.link_karma
#if author_karma <= 0 and random.randint(author_karma, 0) != 0:
#return False
- if user:
+ if user and not c.ignore_hide_rules:
if user.pref_hide_ups and wrapped.likes == True:
return False
-
+
if user.pref_hide_downs and wrapped.likes == False:
return False
@@ -325,9 +325,7 @@ class Link(Thing, Printable):
item.score = max(0, item.score)
item.domain = (domain(item.url) if not item.is_self
- else 'self.' + item.subreddit.name)
- if not hasattr(item,'top_link'):
- item.top_link = False
+ else 'self.' + item.subreddit.name)
item.urlprefix = ''
item.saved = bool(saved.get((user, item, 'save')))
item.hidden = bool(hidden.get((user, item, 'hide')))
@@ -389,8 +387,10 @@ class Link(Thing, Printable):
item.link_child = SelfTextChild(item, expand = expand,
nofollow = item.nofollow)
#draw the edit button if the contents are pre-expanded
- item.editable = expand and item.author == c.user
-
+ item.editable = (expand and
+ item.author == c.user and
+ not item._deleted)
+
item.tblink = "http://%s/tb/%s" % (
get_domain(cname = cname, subreddit=False),
item._id36)
@@ -424,6 +424,11 @@ class Link(Thing, Printable):
item.midcolmargin = CachedVariable("midcolmargin")
item.comment_label = CachedVariable("numcomments")
+ item.as_deleted = False
+ if item.deleted and not c.user_is_admin:
+ item.author = DeletedUser()
+ item.as_deleted = True
+
if user_is_loggedin:
incr_counts(wrapped)
@@ -468,7 +473,7 @@ class PromotedLink(Link):
class Comment(Thing, Printable):
_data_int_props = Thing._data_int_props + ('reported',)
- _defaults = dict(reported = 0, parent_id = None,
+ _defaults = dict(reported = 0, parent_id = None,
moderator_banned = False, new = False,
banned_before_moderator = False)
@@ -596,7 +601,7 @@ class Comment(Thing, Printable):
if not hasattr(item, 'subreddit'):
item.subreddit = item.subreddit_slow
- if item.author_id == item.link.author_id:
+ if item.author_id == item.link.author_id and not item.link._deleted:
add_attr(item.attribs, 'S',
link = item.link.make_permalink(item.subreddit))
if not hasattr(item, 'target'):
@@ -708,6 +713,14 @@ class MoreMessages(Printable):
def recipient(self):
return self.parent.recipient
+ @property
+ def sr_id(self):
+ return self.parent.sr_id
+
+ @property
+ def subreddit(self):
+ return self.parent.subreddit
+
class MoreComments(Printable):
cachable = False
@@ -746,47 +759,94 @@ class MoreChildren(MoreComments):
class Message(Thing, Printable):
_defaults = dict(reported = 0, was_comment = False, parent_id = None,
- new = False, first_message = None,
- to_collapse = None, author_collapse = None)
+ new = False, first_message = None, to_id = None,
+ sr_id = None, to_collapse = None, author_collapse = None)
_data_int_props = Thing._data_int_props + ('reported', )
- cache_ignore = set(["to"]).union(Printable.cache_ignore)
+ cache_ignore = set(["to", "subreddit"]).union(Printable.cache_ignore)
@classmethod
- def _new(cls, author, to, subject, body, ip, parent = None):
+ def _new(cls, author, to, subject, body, ip, parent = None, sr = None):
m = Message(subject = subject,
body = body,
author_id = author._id,
new = True,
ip = ip)
m._spam = author._spam
+ sr_id = None
+ # check to see if the recipient is a subreddit and swap args accordingly
+ if to and isinstance(to, Subreddit):
+ to, sr = None, to
+
+ if sr:
+ sr_id = sr._id
if parent:
m.parent_id = parent._id
if parent.first_message:
m.first_message = parent.first_message
else:
m.first_message = parent._id
+ if parent.sr_id:
+ sr_id = parent.sr_id
+
+ if not to and not sr_id:
+ raise CreationError, "Message created with neither to nor sr_id"
+
+ m.to_id = to._id if to else None
+ if sr_id is not None:
+ m.sr_id = sr_id
- m.to_id = to._id
m._commit()
- #author = Author(author, m, 'author')
- #author._commit()
-
- # only global admins can be message spammed.
inbox_rel = None
- if not m._spam or to.name in g.admins:
- inbox_rel = Inbox._add(to, m, 'inbox')
+ if sr_id and not sr:
+ sr = Subreddit._byID(sr_id)
+ inbox_rel = []
+ # if there is a subreddit id, we have to add it to the moderator inbox
+ if sr_id:
+ inbox_rel.append(ModeratorInbox._add(sr, m, 'inbox'))
+ if author.name in g.admins:
+ m.distinguished = 'admin'
+ m._commit()
+ elif sr.is_moderator(author):
+ m.distinguished = 'yes'
+ m._commit()
+ # if there is a "to" we may have to create an inbox relation as well
+ # also, only global admins can be message spammed.
+ if to and (not m._spam or to.name in g.admins):
+ # if the current "to" is not a sr moderator,
+ # they need to be notified
+ if not sr_id or not sr.is_moderator(to):
+ inbox_rel.append(Inbox._add(to, m, 'inbox'))
+ # find the message originator
+ elif sr_id and m.first_message:
+ first = Message._byID(m.first_message, True)
+ orig = Account._byID(first.author_id)
+ # if the originator is not a moderator...
+ if not sr.is_moderator(orig) and orig._id != author._id:
+ inbox_rel.append(Inbox._add(orig, m, 'inbox'))
return (m, inbox_rel)
@property
def permalink(self):
return "/message/messages/%s" % self._id36
- def can_view(self):
- return (c.user_is_loggedin and
- (c.user_is_admin or
- c.user._id in (self.author_id, self.to_id)))
+ def can_view_slow(self):
+ if c.user_is_loggedin:
+ # simple case from before:
+ if (c.user_is_admin or
+ c.user._id in (self.author_id, self.to_id)):
+ return True
+ elif self.sr_id:
+ sr = Subreddit._byID(self.sr_id)
+ is_moderator = sr.is_moderator(c.user)
+ # moderators can view messages on subreddits they moderate
+ if is_moderator:
+ return True
+ elif self.first_message:
+ first = Message._byID(self.first_message, True)
+ return (first.author_id == c.user._id)
+
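Taken together, _new and can_view_slow define modmail's delivery and visibility rules; summarized (a sketch of the branches above, not new behavior):

    # delivery for a new Message m:
    #   sr_id set                  -> ModeratorInbox._add(sr, m), nudging each
    #                                 other mod's modmsgtime
    #   'to' set, not a mod of sr  -> Inbox._add(to, m)
    #                                 (spam only reaches global admins)
    #   reply in a modmail thread whose originator is neither a moderator
    #   nor the author             -> Inbox._add(originator, m)
    #
    # visibility: admins, the author, the 'to' recipient, moderators of
    # sr_id, and the originator of a modmail thread.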
@classmethod
def add_props(cls, user, wrapped):
@@ -795,19 +855,33 @@ class Message(Thing, Printable):
#reset msgtime after this request
msgtime = c.have_messages
- #load the "to" field if required
- to_ids = set(w.to_id for w in wrapped)
+ # make sure there is a sr_id set:
+ for w in wrapped:
+ if not hasattr(w, "sr_id"):
+ w.sr_id = None
+
+ # load the to fields if one exists
+ to_ids = set(w.to_id for w in wrapped if w.to_id is not None)
tos = Account._byID(to_ids, True) if to_ids else {}
+
+ # load the subreddit field if one exists:
+ sr_ids = set(w.sr_id for w in wrapped if w.sr_id is not None)
+ m_subreddits = Subreddit._byID(sr_ids, data = True, return_dict = True)
+
+ # load the links and their subreddits (if comment-as-message)
links = Link._byID(set(l.link_id for l in wrapped if l.was_comment),
data = True,
return_dict = True)
- subreddits = Subreddit._byID(set(l.sr_id for l in links.values()),
- data = True, return_dict = True)
+ # subreddits of the links (for comment-as-message)
+ l_subreddits = Subreddit._byID(set(l.sr_id for l in links.values()),
+ data = True, return_dict = True)
+
parents = Comment._byID(set(l.parent_id for l in wrapped
if l.parent_id and l.was_comment),
data = True, return_dict = True)
# load the inbox relations for the messages to determine new-ness
+ # TODO: query cache?
inbox = Inbox._fast_query(c.user,
[item.lookups[0] for item in wrapped],
['inbox', 'selfreply'])
@@ -816,18 +890,36 @@ class Message(Thing, Printable):
inbox = dict((m._fullname, v)
for (u, m, n), v in inbox.iteritems() if v)
- for item in wrapped:
- item.to = tos[item.to_id]
- item.recipient = (item.to_id == c.user._id)
+ modinbox = ModeratorInbox._query(
+ ModeratorInbox.c._thing2_id == [item._id for item in wrapped],
+ data = True)
+
+ # best to not have to eager_load the things
+ def make_message_fullname(mid):
+ return "t%s_%s" % (utils.to36(Message._type_id), utils.to36(mid))
+ modinbox = dict((make_message_fullname(v._thing2_id), v)
+ for v in modinbox)
+
+ for item in wrapped:
+ item.to = tos.get(item.to_id)
+ if item.sr_id:
+ item.recipient = (item.author_id != c.user._id)
+ else:
+ item.recipient = (item.to_id == c.user._id)
- # don't mark non-recipient messages as new
- if not item.recipient:
- item.new = False
# new-ness is stored on the relation
+ if item.author_id == c.user._id:
+ item.new = False
elif item._fullname in inbox:
item.new = getattr(inbox[item._fullname], "new", False)
- if item.new and c.user.pref_mark_messages_read:
- queries.set_unread(inbox[item._fullname]._thing2, False)
+ # wipe new messages if preferences say so, and this isn't a feed
+ # and it is in the user's personal inbox
+ if (item.new and c.user.pref_mark_messages_read
+ and c.extension not in ("rss", "xml", "api", "json")):
+ queries.set_unread(inbox[item._fullname]._thing2,
+ c.user, False)
+ elif item._fullname in modinbox:
+ item.new = getattr(modinbox[item._fullname], "new", False)
else:
item.new = False
@@ -835,9 +927,10 @@ class Message(Thing, Printable):
item.score_fmt = Score.none
item.message_style = ""
+ # comment as message:
if item.was_comment:
link = links[item.link_id]
- sr = subreddits[link.sr_id]
+ sr = l_subreddits[link.sr_id]
item.to_collapse = False
item.author_collapse = False
item.link_title = link.title
@@ -851,6 +944,9 @@ class Message(Thing, Printable):
else:
item.subject = _('post reply')
item.message_style = "post-reply"
+ elif item.sr_id is not None:
+ item.subreddit = m_subreddits[item.sr_id]
+
if c.user.pref_no_profanity:
item.subject = profanity_filter(item.subject)
@@ -866,12 +962,17 @@ class Message(Thing, Printable):
# Run this last
Printable.add_props(user, wrapped)
+ @property
+ def subreddit_slow(self):
+ from subreddit import Subreddit
+ if self.sr_id:
+ return Subreddit._byID(self.sr_id)
+
@staticmethod
def wrapped_cache_key(wrapped, style):
s = Printable.wrapped_cache_key(wrapped, style)
- s.extend([c.msg_location, wrapped.new, wrapped.collapsed])
+ s.extend([wrapped.new, wrapped.collapsed])
return s
-
def keep_item(self, wrapped):
return True
@@ -914,3 +1015,35 @@ class Inbox(MultiRelation('inbox',
res.append(i)
return res
+
+class ModeratorInbox(Relation(Subreddit, Message)):
+ #TODO: shouldn't dupe this
+ @classmethod
+ def _add(cls, sr, obj, *a, **kw):
+ i = ModeratorInbox(sr, obj, *a, **kw)
+ i.new = True
+ i._commit()
+
+ if not sr._loaded:
+ sr._load()
+
+ moderators = Account._byID(sr.moderator_ids(), return_dict = False)
+ for m in moderators:
+ if obj.author_id != m._id and not getattr(m, 'modmsgtime', None):
+ m.modmsgtime = obj._date
+ m._commit()
+
+ return i
+
+ @classmethod
+ def set_unread(cls, thing, unread):
+ inbox = cls._query(cls.c._thing2_id == thing._id,
+ eager_load = True)
+ res = []
+ for i in inbox:
+ if i:
+ i.new = unread
+ i._commit()
+ res.append(i)
+ return res
+
diff --git a/r2/r2/models/mail_queue.py b/r2/r2/models/mail_queue.py
index da91ba967..8f25654f6 100644
--- a/r2/r2/models/mail_queue.py
+++ b/r2/r2/models/mail_queue.py
@@ -300,6 +300,7 @@ class Email(object):
"FINISHED_PROMO",
"NEW_PROMO",
"HELP_TRANSLATE",
+ "NERDMAIL"
)
subjects = {
@@ -318,6 +319,7 @@ class Email(object):
Kind.FINISHED_PROMO : _("[reddit] your promotion has finished"),
Kind.NEW_PROMO : _("[reddit] your promotion has been created"),
Kind.HELP_TRANSLATE : _("[i18n] translation offer from '%(user)s'"),
+ Kind.NERDMAIL : _("[reddit] hey, nerd!"),
}
def __init__(self, user, thing, email, from_name, date, ip, banned_ip,
diff --git a/r2/r2/models/subreddit.py b/r2/r2/models/subreddit.py
index ebd69fc84..00a43d2f9 100644
--- a/r2/r2/models/subreddit.py
+++ b/r2/r2/models/subreddit.py
@@ -50,12 +50,11 @@ class Subreddit(Thing, Printable):
allow_top = False, # overridden in "_new"
description = '',
images = {},
- ad_type = None,
- ad_file = os.path.join(g.static_path, 'ad_default.html'),
reported = 0,
valid_votes = 0,
show_media = False,
- css_on_cname = True,
+ css_on_cname = True,
+ use_whitelist = False,
domain = None,
over_18 = False,
mod_actions = 0,
@@ -64,6 +63,7 @@ class Subreddit(Thing, Printable):
sponsorship_img = None,
sponsorship_name = None,
)
+ _essentials = ('type', 'name')
_data_int_props = ('mod_actions',)
sr_limit = 50
@@ -98,7 +98,11 @@ class Subreddit(Thing, Printable):
q = cls._query(lower(cls.c.name) == name.lower(),
cls.c._spam == (True, False),
limit = 1)
- l = list(q)
+ try:
+ l = list(q)
+ except UnicodeEncodeError:
+ print "Error looking up SR %r" % name
+ raise
if l:
return l[0]._id
@@ -199,8 +203,9 @@ class Subreddit(Thing, Printable):
return (user
and (c.user_is_admin
or self.is_moderator(user)
- or (self.type in ('restricted', 'private')
- and self.is_contributor(user))))
+ or ((self.type in ('restricted', 'private') or
+ self.use_whitelist) and
+ self.is_contributor(user))))
def can_give_karma(self, user):
return self.is_special(user)
@@ -213,8 +218,8 @@ class Subreddit(Thing, Printable):
rl_karma = g.MIN_RATE_LIMIT_COMMENT_KARMA
else:
rl_karma = g.MIN_RATE_LIMIT_KARMA
-
- return not (self.is_special(user) or
+
+ return not (self.is_special(user) or
user.karma(kind, self) >= rl_karma)
def can_view(self, user):
@@ -231,7 +236,8 @@ class Subreddit(Thing, Printable):
def load_subreddits(cls, links, return_dict = True):
"""returns the subreddits for a list of links. it also preloads the
permissions for the current user."""
- srids = set(l.sr_id for l in links if hasattr(l, "sr_id"))
+ srids = set(l.sr_id for l in links
+ if getattr(l, "sr_id", None) is not None)
subreddits = {}
if srids:
subreddits = cls._byID(srids, True)
@@ -312,7 +318,7 @@ class Subreddit(Thing, Printable):
data = True,
read_cache = True,
write_cache = True,
- cache_time = g.page_cache_time)
+ cache_time = 3600)
if lang != 'all':
pop_reddits._filter(Subreddit.c.lang == lang)
@@ -579,15 +585,12 @@ class DefaultSR(FakeSubreddit):
srs = Subreddit._byID(sr_ids, return_dict = False)
if g.use_query_cache:
- results = []
- for sr in srs:
- results.append(queries.get_links(sr, sort, time))
- return queries.merge_cached_results(*results)
+ results = [queries.get_links(sr, sort, time)
+ for sr in srs]
+ return queries.merge_results(*results)
else:
q = Link._query(Link.c.sr_id == sr_ids,
sort = queries.db_sort(sort))
- if sort == 'toplinks':
- q._filter(Link.c.top_link == True)
if time != 'all':
q._filter(queries.db_times[time])
return q
@@ -652,7 +655,7 @@ class DomainSR(FakeSubreddit):
def get_links(self, sort, time):
from r2.lib.db import queries
return queries.get_domain_links(self.domain, sort, time)
-
+
Sub = SubSR()
Friends = FriendsSR()
All = AllSR()
diff --git a/r2/r2/public/static/comscore.html b/r2/r2/public/static/comscore.html
new file mode 120000
index 000000000..939a56851
--- /dev/null
+++ b/r2/r2/public/static/comscore.html
@@ -0,0 +1 @@
+../../templates/comscore.html
\ No newline at end of file
diff --git a/r2/r2/public/static/css/reddit.css b/r2/r2/public/static/css/reddit.css
index ac7fd2b64..fc365b3b5 100644
--- a/r2/r2/public/static/css/reddit.css
+++ b/r2/r2/public/static/css/reddit.css
@@ -149,6 +149,11 @@ ul.flat-vert {text-align: left;}
}
#mail img {position: relative; top: 2px}
+#modmail img {position: relative; top: 4px; margin-top: -6px; }
+#modmail.nohavemail {
+ opacity: .7;
+ filter:alpha(opacity=70); /* IE patch */
+}
.user {color: gray;}
@@ -692,6 +697,10 @@ ul.flat-vert {text-align: left;}
margin: 5px;
margin-right: 15px;
}
+.md td, .md th { border: 1px solid #EEE; padding: 1px 3px; }
+.md th { font-weight: bold; }
+.md table { margin: 5px 10px; }
+.md center { text-align: left; }
/*top link*/
a.star { text-decoration: none; color: #ff8b60 }
@@ -912,6 +921,12 @@ textarea.gray { color: gray; }
margin-left: 12px;
}
+.message.was-comment .child .message,
+.message.was-comment .child .usertext {
+ margin-top: 0px;
+ margin-left: 0px;
+}
+
.message .expand {
display: none;
}
@@ -1148,16 +1163,16 @@ textarea.gray { color: gray; }
}
.server-status td { padding-right: 2px; padding-left: 2px; }
.server-status .bar { height: 5px; background-color: blue; }
-.server-status .load0 { background-color: #FFFFFF; }
-.server-status .load1 { background-color: #f0f5FF; }
-.server-status .load2 { background-color: #E2ECFF; }
-.server-status .load3 { background-color: #d6f5cb; }
-.server-status .load4 { background-color: #CAFF98; }
-.server-status .load5 { background-color: #e4f484; }
-.server-status .load6 { background-color: #FFEA71; }
-.server-status .load7 { background-color: #ffdb81; }
-.server-status .load8 { background-color: #FF9191; }
-.server-status .load9 { background-color: #FF0000; color: #FFFFFF }
+.load0 { background-color: #FFFFFF; } /* white */
+.load1 { background-color: #f0f5FF; } /* pale blue */
+.load2 { background-color: #E2ECFF; } /* blue */
+.load3 { background-color: #d6f5cb; } /* pale green */
+.load4 { background-color: #CAFF98; } /* green */
+.load5 { background-color: #e4f484; } /* yellowgreen */
+.load6 { background-color: #FFEA71; } /* orange */
+.load7 { background-color: #ffdb81; } /* orangerose */
+.load8 { background-color: #FF9191; } /* pink */
+.load9 { background-color: #FF0000; color: #FFFFFF } /* red */
.server-status tr.down > * {
background-color: #C0C0C0;
text-decoration: line-through;
@@ -1428,6 +1443,250 @@ textarea.gray { color: gray; }
.oldbylink a { background-color: #F0F0F0; margin: 2px; color: gray}
+.error-log {
+ clear: both;
+}
+
+.error-log a:hover { text-decoration: underline }
+
+.error-log .rest {
+ display: none;
+}
+
+.error-log:first-child .rest {
+ display: block;
+}
+
+.error-log, .error-log .exception {
+ border: solid #aaa 1px;
+ padding: 3px 5px;
+ margin-bottom: 10px;
+}
+
+.error-log .exception {
+ background-color: #f0f0f8;
+}
+
+.error-log .exception.new {
+ border: dashed #ff6600 2px;
+}
+
+.error-log .exception.severe {
+ border: solid #ff0000 2px;
+ background-color: #ffdfdf;
+}
+
+.error-log .exception.interesting {
+ border: dotted black 2px;
+ background-color: #e0e0e8;
+}
+
+.error-log .exception.fixed {
+ border: solid #008800 1px;
+ background-color: #e8f6e8;
+}
+
+.error-log .exception span {
+ font-weight: bold;
+ margin-right: 5px;
+}
+
+.error-log .exception span.normal {
+ margin-right: 0;
+ display: none;
+}
+
+.error-log .exception span.new, .error-log .edit-area label.new {
+ color: #ff6600;
+}
+
+.error-log .exception span.severe, .error-log .edit-area label.severe {
+ color: #ff0000;
+}
+
+.error-log .exception span.interesting, .error-log .edit-area label.interesting {
+ font-weight: normal;
+ font-style: italic;
+}
+
+.error-log .exception span.fixed, .error-log .edit-area label.fixed {
+ color: #008800;
+}
+
+.error-log .exception-name {
+ margin-right: 5px;
+}
+
+.error-log .nickname {
+ color: black;
+ font-weight: bold;
+ font-size: larger;
+}
+
+.error-log .exception.fixed .nickname {
+ text-decoration: line-through;
+}
+
+.error-log a:focus {
+ -moz-outline-style: none;
+}
+
+.error-log .edit-area {
+ border: solid black 1px;
+ background-color: #eee;
+}
+
+.error-log .edit-area label {
+ margin-right: 25px;
+}
+
+.error-log .edit-area input[type=radio] {
+ margin-right: 4px;
+}
+
+.error-log .edit-area input[type=text] {
+ width: 800px;
+}
+
+.error-log .edit-area table td, .error-log .edit-area table th {
+ padding: 5px 0 0 5px;
+}
+
+.error-log .save-button {
+ margin: 0 5px 5px 0;
+ font-size: small;
+ padding: 0;
+}
+
+.error-log .date {
+ font-size: 150%;
+ font-weight: bold;
+}
+
+.error-log .hexkey {
+ color: #997700;
+}
+
+.error-log .exception-name {
+ font-size: larger;
+ color: #000077;
+}
+
+.error-log .frequency {
+ font-size: larger;
+ float: right;
+ color: #886666;
+}
+
+.error-log .occurrences {
+ border: solid #003300 1px;
+ margin: 5px 0 2px;
+ padding: 2px;
+}
+
+.error-log .occurrence {
+ color: #003300;
+ font-family: monospace;
+ margin-right: 3em;
+ white-space: nowrap;
+}
+
+.error-log table.stacktrace th, .error-log table.stacktrace td {
+ border: solid 1px #aaa;
+}
+
+.error-log table.stacktrace td {
+ font-family: monospace;
+}
+
+.error-log table.stacktrace td.col-1 {
+ text-align: right;
+ padding-right: 10px;
+}
+
+.error-log .logtext.error {
+ color: black;
+ margin: 0 0 10px 0;
+}
+
+.error-log .logtext {
+ margin-bottom: 10px;
+ border: solid #555 2px;
+ background-color: #eeece6;
+ padding: 5px;
+ font-size: small;
+}
+
+.error-log .logtext * {
+ color: black;
+}
+
+.error-log .logtext.error .loglevel {
+ color: white;
+ background-color: red;
+}
+
+.error-log .logtext.warning .loglevel {
+ background-color: #ff6600;
+}
+
+.error-log .logtext.info .loglevel {
+ background-color: #00bbff;
+}
+
+.error-log .logtext.debug .loglevel {
+ background-color: #00ee00;
+}
+
+.error-log .logtext .loglevel {
+ padding: 0 5px;
+ margin-right: 5px;
+ border: solid black 1px;
+}
+.error-log .logtext table {
+ margin: 8px 5px 2px 0;
+ font-family: monospace;
+}
+
+.error-log .logtext table,
+.error-log .logtext table th,
+.error-log .logtext table td {
+ border: solid #aaa 1px;
+}
+.error-log .logtext table th, .error-log .logtext table td {
+ border: solid #aaa 1px;
+}
+
+.error-log .logtext table .occ {
+ text-align: right;
+}
+
+.error-log .logtext table .dotdotdot {
+ padding: 0;
+}
+.error-log .logtext table .dotdotdot a {
+ margin: 0;
+ display: block;
+ width: 100%;
+ height: 100%;
+ background-color: #e0e0e0;
+}
+.error-log .logtext table .dotdotdot a:hover {
+ background-color: #bbb;
+ text-decoration: none;
+}
+
+.error-log .logtext .classification {
+ font-size: larger;
+ font-weight: bold;
+}
+.error-log .logtext .actual-text {
+ max-width: 600px;
+ overflow: hidden;
+}
+.error-log .logtext .occ {
+}
+
.details {
font-size: x-small;
margin-bottom: 10px;
@@ -1967,6 +2226,15 @@ form input[type=radio] {margin: 2px .5em 0 0; }
.reported { background-color: #f6e69f }
.suspicious { background-color: #f6e69f }
.spam { background-color: #FA8072 }
+.banned-user {
+ overflow: hidden;
+ opacity: .7;
+ filter:alpha(opacity=70); /* IE patch */
+}
+
+.banned-user .title {
+ text-decoration: line-through;
+}
.little { font-size: smaller }
.gray { color: gray }
@@ -2087,7 +2355,31 @@ ul#image-preview-list .description pre {
padding: 5px;
margin: 5px;
float: left;
-}
+}
+
+.private-feeds.instructions .prefright {
+ line-height: 2em;
+}
+
+.private-feeds.instructions .feedlink {
+ padding: 2px 5px;
+ font-weight: bold;
+ margin-right: 5px;
+ border: 1px solid #0000FF;
+ color: white;
+ padding-left: 22px;
+ background: #336699 none no-repeat scroll top left;
+}
+
+.private-feeds.instructions .feedlink.rss-link {
+ background-image: url(/static/rss.png);
+}
+
+.private-feeds.instructions .feedlink.json-link {
+ background-color: #DDDDDD;
+ background-image: url(/static/json.png);
+ color: black;
+}
/* Socialite */
.socialite.instructions ul {
@@ -2744,20 +3036,20 @@ ul.tabmenu.formtab {
color: #336699;
}
-.award-table {
+.lined-table {
margin: 5px;
}
-table.award-table {
+table.lined-table {
margin: 5px 3px;
}
-.award-table th, .award-table td {
+.lined-table th, .lined-table td {
border: solid #cdcdcd 1px;
padding: 3px;
}
-.award-table th {
+.lined-table th {
text-align: center;
font-weight: bold;
}
@@ -2782,7 +3074,6 @@ table.award-table {
.sidecontentbox a.helplink {
float: right;
- font-size: x-small;
margin-top: 4px;
}
@@ -3242,6 +3533,9 @@ dd { margin-left: 20px; }
.icon-menu .reddit-moderators {
background-image: url(/static/star.png); /* SPRITE */
}
+.icon-menu .moderator-mail {
+ background-image: url(/static/mailgray.png); /* SPRITE */
+}
.icon-menu .reddit-contributors {
background-image: url(/static/pencil.png); /* SPRITE */
}
@@ -3278,14 +3572,15 @@ dd { margin-left: 20px; }
border: 1px solid gray;
}
-a.ip {
+a.adminbox {
border: solid 1px #eeeeee;
color: #cdcdcd;
font-family: monospace;
- text-size: x-small;
+ text-align: center;
+ padding-right: 1px;
}
-a.ip:hover {
+a.adminbox:hover {
text-decoration: none;
color: orangered;
border: solid 1px orangered;
@@ -3302,3 +3597,83 @@ a.ip:hover {
font-weight: bold;
}
+.wide {
+ width: 100%;
+}
+
+.centered {
+ text-align: center;
+ vertical-align: middle;
+}
+
+.sr-ad-table .inherited {
+ background-color: #ddeeff;
+}
+.sr-ad-table .overridden {
+ background-color: #ffeedd;
+}
+.sr-ad-table .unused {
+ background-color: #eee;
+}
+.sr-ad-table .inherited .whence {
+ font-style: italic;
+}
+.sr-ad-table .overridden .whence {
+ font-weight: bold;
+}
+.sr-ad-table .details {
+ font-size: 150%;
+ padding: 10px;
+ vertical-align: top;
+}
+.sr-ad-table .details div {
+}
+.sr-ad-table .details .codename {
+ font-size: 150%;
+ margin-bottom: 20px;
+}
+.sr-ad-table .weight {
+ width: 4em;
+}
+
+.ad-assign-table .warning {
+ font-weight: bold;
+ color: red;
+}
+
+.usage-table .intersection {
+ color: #888;
+ font-family: monospace;
+ text-align: right;
+ border-left: none;
+ border-right: none;
+}
+
+.usage-table .intersection span {
+ padding: 1px 3px 0 2px;
+}
+
+.usage-table .empty.intersection {
+ text-align: center;
+ color: #ccc;
+}
+
+.usage-table .elapsed.intersection {
+ color: black;
+}
+
+.usage-table .count.intersection {
+ color: black;
+}
+
+.usage-table .average.intersection {
+ color: black;
+ border-right: solid #cdcdcd 1px;
+}
+
+.usage-table .empty.intersection, .usage-table .average.intersection {
+ padding-left: 0;
+ margin-left: 0;
+ border-right: solid #cdcdcd 1px;
+ padding-right: 5px;
+}
diff --git a/r2/r2/public/static/js/jquery.reddit.js b/r2/r2/public/static/js/jquery.reddit.js
index 8f1b1fe24..4f9a37ea2 100644
--- a/r2/r2/public/static/js/jquery.reddit.js
+++ b/r2/r2/public/static/js/jquery.reddit.js
@@ -66,9 +66,9 @@ $.with_default = function(value, alt) {
$.unsafe = function(text) {
/* inverts websafe filtering of reddit app. */
if(typeof(text) == "string") {
- text = text.replace(/&gt;/g, ">")
- .replace(/&lt;/g, "<").replace(/&amp;/g, "&")
- .replace(/&quot;/g, '"');
+ text = text.replace(/&quot;/g, '"')
+ .replace(/&gt;/g, ">").replace(/&lt;/g, "<")
+ .replace(/&amp;/g, "&");
}
return (text || "");
};
@@ -121,8 +121,12 @@ function handleResponse(action) {
objs[0] = jQuery;
$.map(r.jquery, function(q) {
var old_i = q[0], new_i = q[1], op = q[2], args = q[3];
- for(var i = 0; args.length && i < args.length; i++)
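+        /* args may be a bare websafe string or an array of websafe strings */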
+ if (typeof(args) == "string") {
+ args = $.unsafe(args);
+ } else { // assume array
+ for(var i = 0; args.length && i < args.length; i++)
args[i] = $.unsafe(args[i]);
+ }
if (op == "call")
objs[new_i] = objs[old_i].apply(objs[old_i]._obj, args);
else if (op == "attr") {
@@ -220,7 +224,8 @@ rate_limit = function() {
var default_rate_limit = 333;
/* rate limit on a per-action basis (also in ms, 0 = don't rate limit) */
var rate_limits = {"vote": 333, "comment": 5000,
- "ignore": 0, "ban": 0, "unban": 0};
+ "ignore": 0, "ban": 0, "unban": 0,
+ "assignad": 0 };
var last_dates = {};
/* paranoia: copy global functions used to avoid tampering. */
@@ -483,9 +488,9 @@ $.insert_things = function(things, append) {
var midcol = $(".midcol:visible:first").css("width");
var numcol = $(".rank:visible:first").css("width");
var s = $.listing(data.parent);
- if(append)
+ if(append)
s = s.append($.unsafe(data.content)).children(".thing:last");
- else
+ else
s = s.prepend($.unsafe(data.content)).children(".thing:first");
s.find(".midcol").css("width", midcol);
s.find(".rank").css("width", midcol);
diff --git a/r2/r2/public/static/json.png b/r2/r2/public/static/json.png
new file mode 100644
index 0000000000000000000000000000000000000000..6f349fa857be22940dd953957ff0e4242c68df6a
Binary files /dev/null and b/r2/r2/public/static/json.png differ
diff --git a/r2/r2/public/static/modmailgray.png b/r2/r2/public/static/modmailgray.png
new file mode 100644
index 0000000000000000000000000000000000000000..5129cae2f64ee4eac2e8025a88c936ea74e12261
Binary files /dev/null and b/r2/r2/public/static/modmailgray.png differ
diff --git a/r2/r2/templates/adminadassign.html b/r2/r2/templates/adminadassign.html
new file mode 100644
index 000000000..c9cbf2cb8
--- /dev/null
+++ b/r2/r2/templates/adminadassign.html
@@ -0,0 +1,70 @@
+## The contents of this file are subject to the Common Public Attribution
+## License Version 1.0. (the "License"); you may not use this file except in
+## compliance with the License. You may obtain a copy of the License at
+## http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+## License Version 1.1, but Sections 14 and 15 have been added to cover use of
+## software over a computer network and provide for limited attribution for the
+## Original Developer. In addition, Exhibit A has been modified to be consistent
+## with Exhibit B.
+##
+## Software distributed under the License is distributed on an "AS IS" basis,
+## WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+## the specific language governing rights and limitations under the License.
+##
+## The Original Code is Reddit.
+##
+## The Original Developer is the Initial Developer. The Initial Developer of
+## the Original Code is CondeNet, Inc.
+##
+## All portions of the code written by CondeNet are Copyright (c) 2006-2010
+## CondeNet, Inc. All Rights Reserved.
+################################################################################
+
+<%namespace file="utils.html" import="error_field"/>
+
+
+
diff --git a/r2/r2/templates/adminads.html b/r2/r2/templates/adminads.html
new file mode 100644
index 000000000..f5ae80d39
--- /dev/null
+++ b/r2/r2/templates/adminads.html
@@ -0,0 +1,112 @@
+## The contents of this file are subject to the Common Public Attribution
+## License Version 1.0. (the "License"); you may not use this file except in
+## compliance with the License. You may obtain a copy of the License at
+## http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+## License Version 1.1, but Sections 14 and 15 have been added to cover use of
+## software over a computer network and provide for limited attribution for the
+## Original Developer. In addition, Exhibit A has been modified to be consistent
+## with Exhibit B.
+##
+## Software distributed under the License is distributed on an "AS IS" basis,
+## WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+## the specific language governing rights and limitations under the License.
+##
+## The Original Code is Reddit.
+##
+## The Original Developer is the Initial Developer. The Initial Developer of
+## the Original Code is CondeNet, Inc.
+##
+## All portions of the code written by CondeNet are Copyright (c) 2006-2010
+## CondeNet, Inc. All Rights Reserved.
+################################################################################
+
+<%namespace file="utils.html" import="error_field"/>
+
+<%def name="adbuttons(codename, submit_link)">
+
+</%def>
+
+<%def name="adedit(fullname, codename='', imgurl='', linkurl='')">
+
+</%def>
+
+
+
+
+ fn
+ cn
+ img
+ links & buttons
+
+ %for ad in thing.ads:
+
+ ${ad._fullname}
+ ${ad.codename}
+ %if ad.codename == "DART":
+
+
+
+
+ ${adbuttons(ad.codename, ad.submit_link())}
+
+ %else:
+
+
+
+
+
+
+ img: ${ad.imgurl}
+ link: ${ad.linkurl}
+
+ ${adbuttons(ad.codename, ad.submit_link())}
+ ${adedit(ad._fullname, ad.codename, ad.imgurl, ad.linkurl)}
+
+ %endif
+
+ %endfor
+
+
+new ad
+
+${adedit("NEW")}
diff --git a/r2/r2/templates/adminadsrs.html b/r2/r2/templates/adminadsrs.html
new file mode 100644
index 000000000..b7e77bbcc
--- /dev/null
+++ b/r2/r2/templates/adminadsrs.html
@@ -0,0 +1,71 @@
+## The contents of this file are subject to the Common Public Attribution
+## License Version 1.0. (the "License"); you may not use this file except in
+## compliance with the License. You may obtain a copy of the License at
+## http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+## License Version 1.1, but Sections 14 and 15 have been added to cover use of
+## software over a computer network and provide for limited attribution for the
+## Original Developer. In addition, Exhibit A has been modified to be consistent
+## with Exhibit B.
+##
+## Software distributed under the License is distributed on an "AS IS" basis,
+## WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+## the specific language governing rights and limitations under the License.
+##
+## The Original Code is Reddit.
+##
+## The Original Developer is the Initial Developer. The Initial Developer of
+## the Original Code is CondeNet, Inc.
+##
+## All portions of the code written by CondeNet are Copyright (c) 2006-2010
+## CondeNet, Inc. All Rights Reserved.
+################################################################################
+
+<%namespace file="utils.html" import="percentage"/>
+
+<%def name="adsrline(adsr)">
+
+
+
+ ${adsr._thing2.name}
+
+
+
+ ${adsr.weight}
+
+
+ ${percentage(adsr.weight, thing.sr_totals[adsr._thing2.name])}
+
+
+</%def>
+
+
+
+
+ ${thing.ad.codename}
+
+
+
+
+
+ community
+
+
+ wt
+
+
+ pct
+
+
+ %for adsr in thing.adsrs:
+ ${adsrline(adsr)}
+ %endfor
+
+
+
+ back to ads
+
+
diff --git a/r2/r2/templates/adminawardgive.html b/r2/r2/templates/adminawardgive.html
index 7c9c3c059..acffedbc5 100644
--- a/r2/r2/templates/adminawardgive.html
+++ b/r2/r2/templates/adminawardgive.html
@@ -27,7 +27,7 @@
-
+
diff --git a/r2/r2/templates/adminawards.html b/r2/r2/templates/adminawards.html
index b7f1c73ca..2a205fb97 100644
--- a/r2/r2/templates/adminawards.html
+++ b/r2/r2/templates/adminawards.html
@@ -46,7 +46,7 @@
onsubmit="return post_form(this, 'editaward');" id="awardedit-${fullname}">
-
+
codename
@@ -84,7 +84,7 @@
</%def>
-
+
fn
diff --git a/r2/r2/templates/adminawardwinners.html b/r2/r2/templates/adminawardwinners.html
index 5902c9738..0f93a6766 100644
--- a/r2/r2/templates/adminawardwinners.html
+++ b/r2/r2/templates/adminawardwinners.html
@@ -39,7 +39,7 @@
</%def>
-
+
diff --git a/r2/r2/templates/adminerrorlog.html b/r2/r2/templates/adminerrorlog.html
new file mode 100644
index 000000000..bfe35ee59
--- /dev/null
+++ b/r2/r2/templates/adminerrorlog.html
@@ -0,0 +1,211 @@
+## The contents of this file are subject to the Common Public Attribution
+## License Version 1.0. (the "License"); you may not use this file except in
+## compliance with the License. You may obtain a copy of the License at
+## http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+## License Version 1.1, but Sections 14 and 15 have been added to cover use of
+## software over a computer network and provide for limited attribution for the
+## Original Developer. In addition, Exhibit A has been modified to be consistent
+## with Exhibit B.
+##
+## Software distributed under the License is distributed on an "AS IS" basis,
+## WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+## the specific language governing rights and limitations under the License.
+##
+## The Original Code is Reddit.
+##
+## The Original Developer is the Initial Developer. The Initial Developer of
+## the Original Code is CondeNet, Inc.
+##
+## All portions of the code written by CondeNet are Copyright (c) 2006-2010
+## CondeNet, Inc. All Rights Reserved.
+################################################################################
+
+<%namespace file="utils.html" import="error_field"/>
+
+<%def name="status_radio(val, datehex, current)">
+
+ ${val}
+</%def>
+
+
+ %for date, groupings in thing.date_summaries:
+
+
+
+ ${date}
+
+
+
+ %for g in groupings:
+ %if g[0] > 0:
+ ${exception(date, *g)}
+ %else:
+ ${text(date, *g)}
+ %endif
+ %endfor
+
+
+ %endfor
+
+
+<%def name="exception(date, frequency, hexkey, d)">
+ <% datehex = "-".join([date.replace("/",""), hexkey]) %>
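+  ## e.g. a date of "2010/01/05" and hexkey "ab12ef" give "20100105-ab12ef"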
+
+
+</%def>
+
+<%def name="textocc(text, occ, hide)">
+ %if hide:
+
+ %endif
+
+ ${text}
+
+
+ ${occ}
+
+
+</%def>
+
+<%def name="text(date, sort_order, level, classification, textoccs)">
+
+
+ ${level}:
+
+
+ ${classification}
+
+
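+  ## show the first three and last three occurrences; the middle
+  ## entries are rendered with hide=True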
+  %for i, (text, occ) in enumerate(textoccs):
+ %if i < 3 or i >= len(textoccs) - 3:
+ ${textocc(text, occ, False)}
+ %elif i == 3:
+
+
+ ${textocc(text, occ, True)}
+ %else:
+ ${textocc(text, occ, True)}
+ %endif
+ %endfor
+
+
+</%def>
diff --git a/r2/r2/templates/adminusage.html b/r2/r2/templates/adminusage.html
new file mode 100644
index 000000000..b7e8827e8
--- /dev/null
+++ b/r2/r2/templates/adminusage.html
@@ -0,0 +1,78 @@
+## The contents of this file are subject to the Common Public Attribution
+## License Version 1.0. (the "License"); you may not use this file except in
+## compliance with the License. You may obtain a copy of the License at
+## http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
+## License Version 1.1, but Sections 14 and 15 have been added to cover use of
+## software over a computer network and provide for limited attribution for the
+## Original Developer. In addition, Exhibit A has been modified to be consistent
+## with Exhibit B.
+##
+## Software distributed under the License is distributed on an "AS IS" basis,
+## WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
+## the specific language governing rights and limitations under the License.
+##
+## The Original Code is Reddit.
+##
+## The Original Developer is the Initial Developer. The Initial Developer of
+## the Original Code is CondeNet, Inc.
+##
+## All portions of the code written by CondeNet are Copyright (c) 2006-2010
+## CondeNet, Inc. All Rights Reserved.
+################################################################################
+
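+## renders one usage cell as "elapsed / count = average";
+## cells with no data show an &mdash;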
+<%def name="intersection(d, hidden)">
+ %if d is None:
+
+    &mdash;
+
+ %else:
+ %for cls in ("elapsed", "slash", "count", "equals", "average"):
+
+ %if cls == "slash":
+ /
+ %elif cls == "equals":
+ =
+ %else:
+
+ %if cls == 'count':
+ ${d[cls]}
+ %else:
+ ${"%0.2f" % d[cls]}
+ %endif
+
+ %endif
+
+ %endfor
+ %endif
+</%def>
+
+
+
+ action
+ %for label, hidden in thing.labels:
+ ${label}
+ %endfor
+
+
+%for action in thing.action_order:
+
+ ${action}
+ %for label, hidden in thing.labels:
+ ${intersection(thing.actions[action].get(label), hidden)}
+ %endfor
+
+%endfor
+
+
diff --git a/r2/r2/templates/ads.html b/r2/r2/templates/ads.html
index 28d3651e3..8b1796ef5 100644
--- a/r2/r2/templates/ads.html
+++ b/r2/r2/templates/ads.html
@@ -25,20 +25,12 @@
import random
%>
-%if c.site.ad_type == "custom" or c.site.ad_file != c.site._defaults.get("ad_file"):
-
-%elif c.site.ad_type == "basic":
- <% name = c.site.name if not c.default_sr else '' %>
-
-%else:
-
-%endif
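+## name is "r/<sitename>/" on a subreddit page, '' on the front page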
+<% name = "r/%s/" % c.site.name if not c.default_sr else '' %>
+
+
+
+
+
+
+
+