Added the following FIXME:
How should a Twitter user get their Inbox filled with foreign tweets?
Every imported Twitter user has a profile in the Profile table, so we
could set up a Subscription entry for each of those, meaning they get
collected in the InboxNoticeStream... But this would mean a lot of
unnecessary entries and listings that generally just point to the
locked down Twitter service.
Let's figure out a good relation so we can connect any profile to any
imported foreign notice, so it shows up in the "all" feed.
Also cleaned up and made typing stricter for the stream, so only
profiles can be submitted. This reasonably also means we can create
"inbox" or "all" streams for foreign profiles as well using the same
stream handler (but of course only for messages we already know about).
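As a rough sketch, the stricter typing boils down to something like this
(illustrative only, not the actual class body):
class InboxNoticeStream // ...
{
    protected $target;
    // Only Profile instances can be submitted, so the same stream handler
    // serves local users and known foreign profiles alike.
    public function __construct(Profile $target)
    {
        $this->target = $target;
    }
}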
To avoid long-running lookups in a large notice database, the inbox's
lookback period now extends no further back than the profile creation
date. (This matches the behaviour of Inbox.)
The Inbox class can probably be removed now.
By default, the Cron plugin will run if at least 1 second of execution
time remains since the start of Action processing. If you want to change
this (such as disabling it with 0 seconds, or running bigger chunks,
for maybe 4 seconds), you can do this, where 'n' is the time in seconds:
addPlugin('Cron', array('secs_per_action' => n));
Add 'rel_to_pageload'=>false to the array if you want to run the queue
for a certain number of seconds _despite_ maybe already having run that
long in the previous parts of Action processing.
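For example, to always run the queue for 4 seconds per request, regardless
of how long the rest of Action processing took:
addPlugin('Cron', array('secs_per_action' => 4,
                        'rel_to_pageload' => false));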
Perhaps you want to run the cron script remotely, using a machine capable
of background processing (or locally, to avoid running daemon processes);
in that case, simply make an HTTP GET request to the /main/cron route of
your GNU social instance.
Setting secs_per_action to 0 in the plugin config means that all your queue
handling runs through calls to /main/cron (which runs as long as it can).
/main/cron will output "0" if it has finished processing, "1" if it should
be called again to complete processing (because it ran out of time due to
PHP's max_execution_time INI setting).
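A minimal polling sketch (the site URL here is made up; the "0"/"1"
return values are as described above):
<?php
// Keep requesting /main/cron until it reports that processing is done.
$url = 'https://social.example/main/cron';
do {
    $result = trim(file_get_contents($url));
} while ($result === '1');   // "1" means call again to finish the queue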
The Cron plugin also runs events as close to hourly, daily and weekly
as you can get, based on the opportunistic method of running whenever a user
visits the site. This means of course that the cron events should be as
fast as possible, not only to avoid delaying page load for users but
also to minimize the risk of running into PHP's max_execution_time. One
suggestion is to only use the events to add new queue items for later processing.
These events are called CronHourly, CronDaily, CronWeekly - however, there
is no guarantee that every event will execute, so some kind of failsafe,
transaction-ish method must be implemented in the future.
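Following that suggestion, here is a hedged sketch of a handler that only
enqueues work (plugin and queue names are made up; the onCronDaily event
method and QueueManager calls follow the usual GNU social conventions, so
treat the exact signatures as assumptions):
class MyMaintenancePlugin extends Plugin
{
    public function onCronDaily()
    {
        // Cheap: just add a queue item; a queue handler does the heavy
        // lifting later, outside anyone's page load.
        $qm = QueueManager::get();
        $qm->enqueue('daily-maintenance', 'mymaint');
        return true;    // let other plugins see the event too
    }
}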
To make StatusNet::addPlugin() accept only arrays,
lib/default.php had to be changed, because all plugins
there had 'null' as their default value instead of an array.
Also removed the entirely unused saveGroups function.
Now avoiding multiGet and using listFind in Profile->getGroups()
so we don't have to deal with ArrayWrapper.
New plugins:
* LRDD
LRDD implements client-side RFC6415 and RFC7033 resource descriptor
discovery procedures. I.e. LRDD, host-meta and WebFinger stuff.
OStatus and OpenID now depend on the LRDD plugin (XML_XRD).
* WebFinger
This plugin implements the server-side of RFC6415 and RFC7033. Note:
WebFinger technically doesn't handle XRD, but we serve both that and
JRD (JSON Resource Descriptor), depending on the Accept header and one
ugly hack to check for old StatusNet installations.
WebFinger depends on LRDD.
We might make this even prettier by using Net_WebFinger, but it is not
currently RFC7033 compliant (no /.well-known/webfinger resource GETs).
Disabling the WebFinger plugin would effectively render your site non-
federated (which might be desired on a private site).
Disabling the LRDD plugin would make your site unable to do modern web
URI lookups (making life just a little bit harder).
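For reference, a minimal client-side sketch of the RFC 7033 lookup that
these plugins revolve around (hostname and account are made up):
<?php
// Ask the server's /.well-known/webfinger endpoint for a JRD document.
$resource = 'acct:alice@social.example';
$url = 'https://social.example/.well-known/webfinger?resource='
     . urlencode($resource);
$ctx = stream_context_create(array('http' => array(
    'header' => "Accept: application/jrd+json\r\n",  // XRD is served as well
)));
$jrd = json_decode(file_get_contents($url, false, $ctx), true);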
The Orbited plugin may not work at all anymore; I had no means to try it.
But there's a check there for whether 'LACONICA' is defined, which is a
very unlikely thing in the future. So far only tests and scripts have
been migrated consistently, though.
Apparently I forgot scripts/commandline.inc in commit 4c6803a054, which
introduced the 'GNUSOCIAL' definition.
define('GNUSOCIAL', true); indicates that we're running GNUSOCIAL, and that
one should be aware of this if applying patches.
I used this hacky sed command (run it from your GNU social root, or change
the first grep's path to where the code actually lies) to do a rough fix on
all ::staticGet calls, renaming them to ::getKV:
sed -i -s -e '/DataObject::staticGet/I!s/::staticGet/::getKV/Ig' $(grep -R ::staticGet `pwd`/* | grep -v -e '^extlib' | grep -v DataObject:: |grep -v "function staticGet"|cut -d: -f1 |sort |uniq)
If you're applying this, remember to change the Managed_DataObject and Memcached_DataObject function definitions of staticGet to getKV!
This might of course take some getting used to, or require modification of
StatusNet plugins, but the result is that all the static calls (to
staticGet) are now properly made without breaking PHP Strict Standards.
Standards are there to be followed (and the old calls caused some very bad
confusion when combined with get_called_class).
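For example (key/value call style as used throughout the codebase; the
nickname is made up):
// Before (breaks PHP Strict Standards when called statically):
$user = User::staticGet('nickname', 'alice');
// After:
$user = User::getKV('nickname', 'alice');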
Reasonably, any plugin or code that tests for the definition of 'GNUSOCIAL'
or similar will take this change into consideration.
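For instance, a script or plugin file can guard itself like this:
// Abort if we're not running inside GNU social.
if (!defined('GNUSOCIAL')) {
    exit(1);
}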
commit bd23a7da105d635414643dfcedd9c8f710d565b8
Author: Evan Prodromou <evan@e14n.com>
Date: Sat Jun 29 07:49:03 2013 -0400
Make the after flag work correctly
commit 5c5845a2f866f0bbffedd8e2e5d1f512f87d5329
Author: Evan Prodromou <evan@e14n.com>
Date: Sat Jun 29 06:14:43 2013 -0400
Add an 'after' flag for backup script
commit cd43ac412c90722e3b83ec750d9232a2ac2f12c9
Merge: dad72cc adaf175
Author: Evan Prodromou <evan@status.net>
Date: Mon Jul 9 09:41:05 2012 -0400
Merge commit 'refs/merge-requests/196' of git://gitorious.org/statusnet/mainline into merge-requests/196
commit adaf17552d3ab35d451c00cdb32d87a107e0e56a
Author: Jeremy Pope <jpope@jpope.org>
Date: Thu Jul 5 12:33:06 2012 -0500
fix for XMPP high CPU usage - issue no 3232
commit e573e8ee6690af94259ff8793a84652a139d0662
Author: Jeremy Pope <jpope@jpope.org>
Date: Thu Jul 5 12:30:34 2012 -0500
fix for queuedaemon and imdaemon not being stopped by stopdaemons.sh
If you're adding a tag that already exists, or deleting a tag that
doesn't exist, using settag.php, the exit value is 0 instead of the
previous -1. This makes scripting around tags a wee bit easier.
One of the problems we've had with running large-scale hosting systems
for StatusNet is enabling new plugins. If the plugin is not enabled,
its database tables are not checked at script time. Conversely, if it
is enabled, it may take several hours to run checkschema for tens of
thousands of sites -- during which time users might see DB errors.
A new argument to checkschema lets it pre-load one or more plugins
before checking the schema. This lets us prepare the plugins' database
tables before they're used in production. In a multihome environment,
this can be combined with tags to gradually roll out a new plugin.
In the config file, a stanza like:
$site = Status_network::getFromHostname(...);
if ($site->hasTag('fooenabled')) {
    addPlugin('Foo');
}
...will only enable the plugin on certain sites. Meanwhile, a bash
script like this should gradually enable the plugin:
# For all sites...
for site in `php allsites.php`; do
    # Update the schema for the Foo plugin
    php checkschema.php -s$site.wildcard -xFoo;
    # Enable the Foo plugin
    php settag.php -s$site.wildcard fooenabled;
done
Hash (#) comments fester like leprous boils in our code. So, I've replaced
all of them with // comments instead. It's a massive, meaningless, and
potentially buggy change -- great one for the middle of a release cycle, eh?
UserActivityStream -- used to create a full activity stream including subscriptions, favorites, notices, etc -- normally buffers everything into memory at once. This is infeasible for accounts with long histories of serious usage; it can take tens of seconds just to pull all records from the database, and working with them all in memory is very likely to hit resource limits.
This commit adds an alternate mode for this class which avoids pulling notices until the actual output. Instead of pre-sorting and buffering all the notices, empty spaces between the other activities are filled in with notices as we produce output. This means more, smaller queries spread out during the operation, and less stuff kept in memory.
Callers (backupaccount action, and backupuser.php) which can stream their output pass an $outputMode param of UserActivityStream::OUTPUT_RAW, and during getString() it'll send straight to output as well as slurping the notices in this extra funky fashion.
Other callers will let it default to the OUTPUT_STRING mode, which keeps the previous behavior.
There should be a better way to do this, swapping between string output and raw output more consistently.
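A hedged usage sketch (the constructor arguments besides $outputMode are
assumptions; the constants and getString() behaviour come from the
description above):
// Stream a full backup straight to output, pulling notices lazily:
$actstr = new UserActivityStream($user, true, UserActivityStream::OUTPUT_RAW);
$actstr->getString();   // in RAW mode this prints as it goes
// Other callers keep the old buffered behaviour:
$actstr = new UserActivityStream($user);   // defaults to OUTPUT_STRING
$xml = $actstr->getString();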
Moved most of the heavy-lifting for account restoration out of
restoreuser.php and into its own class, with the hope that we'll do
the work from the Web eventually.
common_shorten_links() can only access the web session's logged-in user, so it never properly applied user options when posting via XMPP, API, mail, etc.
Adds an optional $user parameter on common_shorten_links(), and $user->shortenLinks() as a clearer interface for it.
Tweaked some lower-level functions so $user gets passed down, slightly generalizing the $notice_id param that was previously there for saving URLs at notice save time.
Note also ticket #2919: there's a lot of duplicate code calling the shortening, checking the length, and reporting near-identical error messages. These should be consolidated to aid in code and translation maintenance.
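A short sketch of the clearer interface (the text variable is made up):
// Per-user shortening that works outside a web session too:
$shortened = $user->shortenLinks($statustext);
// ...instead of relying on common_shorten_links() to find a logged-in
// web user on its own.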
* add some sanity checking: abort on failures instead of plodding through
* add some progress / error output
* fetch the target database server name from the status_network entry and use that to target the DROP DATABASE
Note that database names and other overrides in the status_network entry may still not be seen.
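A cautious sketch of the intended targeting (the lookup call and the field
names on the status_network entry, like dbhost, are assumptions):
$sn = Status_network::getKV('nickname', $sitename);
if (empty($sn)) {
    die("No such site: $sitename\n");   // abort instead of plodding through
}
// Target the database server named in the status_network entry:
$dbhost = $sn->dbhost ? $sn->dbhost : 'localhost';
// ...connect to $dbhost before issuing the DROP DATABASE.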