I used this hacky sed command (run it from your GNU Social root, or change the first grep's path to wherever your installation actually lives) to do a rough fix on all ::staticGet calls, renaming them to ::getKV:
sed -i -s -e '/DataObject::staticGet/I!s/::staticGet/::getKV/Ig' $(grep -R ::staticGet `pwd`/* | grep -v -e '^extlib' | grep -v DataObject:: |grep -v "function staticGet"|cut -d: -f1 |sort |uniq)
If you're applying this, remember to change the Managed_DataObject and Memcached_DataObject function definitions of staticGet to getKV!
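For reference, a rough sketch of that rename (the real signatures and bodies in Managed_DataObject and Memcached_DataObject differ; this only illustrates the change):

// before: calling this statically tripped PHP Strict Standards
static function staticGet($k, $v=null)
{
    // ... look up an instance by key/value ...
}

// after: same lookup, new name
static function getKV($k, $v=null)
{
    // ... look up an instance by key/value ...
}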
This might of course take some getting used to, or require modification of StatusNet plugins, but the result is that all the static calls (to staticGet) are now made properly, without breaking PHP Strict Standards. Standards are there to be followed (and the old calls caused some very bad confusion when combined with get_called_class).
Any plugin or code that tests for the definition of 'GNUSOCIAL' or similar can reasonably be expected to take this change into account.
commit bd23a7da105d635414643dfcedd9c8f710d565b8
Author: Evan Prodromou <evan@e14n.com>
Date: Sat Jun 29 07:49:03 2013 -0400
Make the after flag work correctly
commit 5c5845a2f866f0bbffedd8e2e5d1f512f87d5329
Author: Evan Prodromou <evan@e14n.com>
Date: Sat Jun 29 06:14:43 2013 -0400
Add an 'after' flag for backup script
commit cd43ac412c90722e3b83ec750d9232a2ac2f12c9
Merge: dad72cc adaf175
Author: Evan Prodromou <evan@status.net>
Date: Mon Jul 9 09:41:05 2012 -0400
Merge commit 'refs/merge-requests/196' of git://gitorious.org/statusnet/mainline into merge-requests/196
commit adaf17552d3ab35d451c00cdb32d87a107e0e56a
Author: Jeremy Pope <jpope@jpope.org>
Date: Thu Jul 5 12:33:06 2012 -0500
fix for XMPP high CPU usage - issue no 3232
commit e573e8ee6690af94259ff8793a84652a139d0662
Author: Jeremy Pope <jpope@jpope.org>
Date: Thu Jul 5 12:30:34 2012 -0500
fix for queuedaemon and imdaemon not being stopped by stopdaemons.sh
If you're adding a tag that already exists, or deleting a tag that doesn't exist, with settag.php, the exit value is now 0 instead of the previous -1. This makes scripting around tags a wee bit easier.
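For example, a loop like this (a sketch; the -s flag and tag name match the settag.php example further down) no longer needs to special-case tags that are already set:

for site in `php allsites.php`; do
    php settag.php -s$site.wildcard fooenabled || echo "settag failed for $site";
done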
One of the problems we've had with running large-scale hosting systems
for StatusNet is enabling new plugins. If the plugin is not enabled,
its database tables are not checked at script time. Conversely, if it
is enabled, it may take several hours to run checkschema for tens of
thousands of sites -- during which time users might see DB errors.
A new argument to checkschema lets it pre-load one or more plugins
before checking the schema. This lets us prepare the plugins' database
tables before they're used in production. In a multihome environment,
this can be combined with tags to gradually roll out a new plugin.
In the config file, a stanza like:
$site = Status_network::getFromHostname(...);
if ($site->hasTag('fooenabled')) {
    addPlugin('Foo');
}
...will only enable the plugin on certain sites. Meanwhile, a bash
script like this should gradually enable the plugin:
# For all sites...
for site in `php allsites.php`; do
    # Update the schema for the Foo plugin
    php checkschema.php -s$site.wildcard -xFoo;
    # Enable the Foo plugin
    php settag.php -s$site.wildcard fooenabled;
done
like leprous boils in our code. So, I've replaced all of them with //
comments instead. It's a massive, meaningless, and potentially buggy
change -- great one for the middle of a release cycle, eh?
UserActivityStream -- used to create a full activity stream including subscriptions, favorites, notices, etc -- normally buffers everything into memory at once. This is infeasible for accounts with long histories of serious usage; it can take tens of seconds just to pull all records from the database, and working with them all in memory is very likely to hit resource limits.
This commit adds an alternate mode for this class which avoids pulling notices until the actual output phase. Instead of pre-sorting and buffering all the notices, the empty spaces between the other activities are filled in with notices as we're producing output. This means more, smaller queries spread out over the run, and less data kept in memory.
Callers (the backupaccount action and backupuser.php) that can stream their output pass an $outputMode param of UserActivityStream::OUTPUT_RAW, and during getString() it'll send straight to output as well as slurping the notices in this extra funky fashion.
Other callers will let it default to the OUTPUT_STRING mode, which keeps the previous behavior.
There should be a better way to do this, swapping out the string output for raw output more consistently.
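A sketch of how a streaming caller might use it (the constructor arguments other than $outputMode are assumptions here):

// raw mode: notices are pulled lazily and written straight to output during getString()
$actstr = new UserActivityStream($user, true, UserActivityStream::OUTPUT_RAW);
$actstr->getString();

// the default OUTPUT_STRING mode keeps the old buffer-everything behavior
$actstr = new UserActivityStream($user);
print $actstr->getString();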
Moved most of the heavy-lifting for account restoration out of
restoreuser.php and into its own class, with the hope that we'll do
the work from the Web eventually.
common_shorten_links() could only access the web session's logged-in user, so it never properly took user options into account for posting via XMPP, the API, mail, etc.
Adds an optional $user parameter to common_shorten_links(), and a $user->shortenLinks() method as a clearer interface for that.
Tweaked some lower-level functions so $user gets passed down, generalizing a little the $notice_id param that was previously there for saving URLs at notice save time.
Note also ticket #2919: there's a lot of duplicate code calling the shortening, checking the length, and reporting near-identical error messages. These should be consolidated to aid in code and translation maintenance.
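A sketch of the difference for a non-web caller such as the XMPP or mail daemons (variable names are illustrative):

// old: only the web session's logged-in user could influence shortening
$content = common_shorten_links($content);

// new: the posting user's settings apply regardless of session
$content = $user->shortenLinks($content);   // wraps common_shorten_links() with $user passed down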
* add some sanity checking: abort on failures instead of plodding through
* add some progress / error output
* fetch the target database server name from the status_network entry and use that to target the DROP DATABASE
Note that database names and other overrides in the status_network entry may still not be seen.
* Moved notification sending from Notice::saveReplies to distrib queue handler, so it'll pull from the reply set we've saved regardless of how we got it.
* Set up gettext infrastructure for command-line scripts; this gets localization of mail notifications etc. working from background queues.
* Adjusted locale switching: common_switch_locale() works at runtime for background scripts and forces a message catalog update.
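A sketch of the pattern this enables in a queue handler (the notification call is only illustrative):

// switch the message catalog to the recipient's locale before rendering the notification
common_switch_locale($recipient->language);
mail_notify_fave($recipient, $user, $notice);   // illustrative: any queued mail notification
// restore the previous locale so later queue items aren't affected (sketch)
common_switch_locale();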