I was storing the full objects in the cache for the listGet()
function. I've changed it to store only pkeys, and use pivotGet() to
get all the corresponding values.
This also required changing pivotGet() so it can get objects with
multi-column pkeys, which complicated the whole thing quite a bit. But
it seems to work OK.
This method lets you get all the objects with a given variable key and
another set of "fixed" keys. A good example is getting all the avatars
for a notice list; the avatar size stays the same, but the IDs change.
Since it's very similar to multiGet(), I refactored that function to
use pivotGet().
And, yes, I realize these are kind of hard to follow.
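A hedged sketch of the calling pattern; the argument layout is my reading of the description above, not a copy of the real signature:

// one variable key (profile_id, from the notice list) plus fixed keys (the avatar size)
$avatars = Avatar::pivotGet('profile_id', $profileIds,
                            array('width' => 48, 'height' => 48));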
L10n/i18n updates.
Superfluous whitespace removed.
Add FIXME notes for a few i18n issues that I couldn't solve quickly.
Takes care of documentation for all core code added in the merge of the "people tags" feature (commit e75c9988ebe33822e493ac225859bc593ff9b855).
For groups that require a private scope, we force every notice to be limited to group scope.
Changed the group-discovery code so we only get groups once -- regardless of whether they were provided or not.
Notice::saveNew() now does these checks directly when making a repeat:
* make sure the original is valid and existing
* stop you from repeating your own message
* stop you from repeating something you've previously repeated
* prevent repeats of any non-public messages
* explicit inScope() check to make sure you can read the original too (just in case there's a funky extension at play that changes scoping rules)
These error conditions throw exceptions, which the caller either uses as an error message or passes on up the stack, without having to duplicate the checks in each i/o channel.
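Roughly, the flow now looks like this; ClientException and the helper names are my shorthand for the checks listed above, not quotes from the code:

$original = Notice::staticGet('id', $repeat_of);
if (empty($original)) {
    throw new ClientException(_('Cannot repeat; original notice is missing.'));
}
if ($profile->id == $original->profile_id) {
    throw new ClientException(_('You cannot repeat your own notice.'));
}
if ($profile->hasRepeated($original->id)) {
    throw new ClientException(_('You already repeated that notice.'));
}
if (!$original->inScope($profile)) {
    throw new ClientException(_('You cannot read that notice.'));
}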
Numbered parameters when more than one used in a message.
L10n updates for consistency.
i18n for non-translatable exception.
Updated translator documentation.
Removed superfluous whitespace.
We disallow repeating a notice (or whatever) if the scope of the
notice is too private. So, only notices that are public scope
(available to everyone in the world) or site scope (available to
everyone on the site) can be repeated.
Enforce this rule at a low level in Notice.php, and in the API,
commands, and Web UI. Repeat button doesn't appear on tightly-scoped
notices in the Web UI.
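A minimal sketch of the gating test; the scope constant names are assumed, not quoted from Notice.php:

$repeatable = ($notice->scope == Notice::PUBLIC_SCOPE
            || $notice->scope == Notice::SITE_SCOPE);
// if false: Notice.php refuses the repeat, and the Web UI omits the button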
We've been muddling through with 6- or 8-argument functions for managing streams. I'd
like to start thinking of streams as their own thing, and give them some more value.
So, the new NoticeStream class takes over the Notice::stream() function and Notice::getStreamByIds().
There's probably some fine-tuning to do on the object interface.
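The rough shape, with the constructor argument and method name illustrative:

$stream  = new NoticeStream($spec);              // takes over Notice::stream()
$notices = $stream->getNotices($offset, $limit); // takes over Notice::getStreamByIds()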
Hash (#) comments stick out like leprous boils in our code. So, I've replaced all of them with //
comments instead. It's a massive, meaningless, and potentially buggy
change -- great one for the middle of a release cycle, eh?
As a hack, this removes the mysql_timestamp bit from the field settings on reply.modified so that our value actually gets saved. This *should* work OK as long as the system timezone is set correctly; we now set it to UTC to match when connecting.
I've extended the rights framework (centering on the Right class and Profile::hasRight()) to cover
Web login and API use. This will make it possible to block login and API use for particular users.
I added two new Right constants to the Right class: WEBLOGIN and API. I check these rights using
Profile::hasRight() when initializing users. If the rights check fails, I throw an exception.
I created a new AuthorizationException class for this particular
exception, in order to allow a different UI for these kinds of exceptions (or whatever).
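A hedged sketch of the check during user initialization; Right::WEBLOGIN, Right::API, Profile::hasRight(), and AuthorizationException are from this change, while the rest ($isApiRequest etc.) is illustrative:

$profile = $user->getProfile();
if (!$profile->hasRight(Right::WEBLOGIN)) {
    throw new AuthorizationException(_('Not allowed to log in.'));
}
if ($isApiRequest && !$profile->hasRight(Right::API)) {
    throw new AuthorizationException(_('Not allowed to use the API.'));
}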
Workaround for deleted profiles still appearing in cached subscriptions/subscribers lists: if we couldn't fetch them, don't include them in the ArrayWrapper.
ArrayWrapper doesn't deal well with null entries, which aren't meant to occur in normal operation. This code recently changed from dying with a PHP fatal error in that case to throwing an exception, which allows tracking down the caller.
It looks like there might be some cases where profiles and their matching subscriptions get deleted, but the subscription entries don't get properly cleared from cache... that still bears further investigation. The regular code path looks ok; calls Subscription::cancel() from code called in Profile::delete(); but if they're batch-deleted instead of one row at a time, that could fail to trigger.
$config['site']['logperf'] = true; // to record & dump total hits of each type and the runtime to syslog
$config['site']['logperf_detail'] = true; // very verbose -- dump the individual cache keys and queries as they get used (may contain private info in some queries)
Seeing 180 cache gets on a timeline page is not unusual currently; since these run in serial, even relatively small round-trip times can add up heavily.
We should consider ways to reduce the number of round trips, such as more frequently storing compound objects or the output of processing in memcached.
Doing parallel multi-key lookups could also help by collapsing round-trip times, but might not be easy to fit into SN's object model. (For things like streams this should actually work pretty well -- grab the list, then when it's returned go grab all the individual items in parallel and return the list)
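For the stream case, the two-phase idea would look roughly like this, assuming a cache client with a multi-get (the PECL memcached extension's Memcached::getMulti, for instance); key names are illustrative:

$ids = $cache->get(common_cache_key('stream:' . $streamKey)); // one round trip for the ID list
$keys = array();
foreach ($ids as $id) {
    $keys[] = common_cache_key('notice:' . $id);
}
$items = $cache->getMulti($keys);                             // one round trip for all items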
* dropped unnecessary join on notice table
* made the function actually static, since it makes no sense as an instance method. The only caller (in AttachmentList) is updated.
MySQL stores TIMESTAMP columns as UTC, but with a local time interface. (SRSLY?!) DATETIME columns are always bare and assumed to be local time, but we keep only UTC in them.
Forcing the session time_zone to UTC means we won't have to worry as much about what we're sending/receiving in there.
This will also let us remove the hack for session tweaks in master commit a7abb2323e.
Had to tweak statusnet.ini to remove the DB_DATAOBJECT_MYSQLTIMESTAMP bitfield constant on session.modified; while it sounds like a useful and legit setting, it actually just means that DB_DataObject silently fails to pass through any attempts to explicitly set the value. As a result, MySQL does its default behavior which is to insert the current *LOCAL* time, which is useless.
This was leading to early GC west of GMT, or late GC east of it. Early GC could at worst destroy all live sessions (whoever's session *triggered* GC is fine, as the session then gets saved right back.)
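The connection-level fix amounts to one statement right after connecting; a minimal sketch, with $db standing in for the new database connection:

$db->query("SET time_zone = '+0:00'"); // session-scoped; TIMESTAMP I/O is now UTC too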
Previously, if someone you subscribe to repeats a notice by someone you've blocked, you got the message and had to just roll your eyes.
Now blocks are checked against both the current notice's posting profile, and the poster of the original if it's a repeat.
It seems to have actually been saving correctly, but the update of the colors on the form success page wasn't working properly.
When a design object is pulled out of the database, the numeric fields are read in as strings, so black comes back as "0".
But, when we populate the new object and then stick it live, we've populated it with actual integers; with memcache on, these might live for a while in the cache...
The fallback code in Design::toWebColor() did a check ($color == null) which would be false for the string "0", but counts as true for the *integer* 0.
Thus, the display code would initially interpret the correctly-saved black color as "use default".
Changing the check to === against null and "" empty string avoids the false positive on integers, and lets us see our nice black text immediately after save.
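In sketch form (the variable name is illustrative; the method is Design::toWebColor() per the above):

if ($color == null) { /* use default */ }                    // old: integer 0 (black) wrongly matches
if ($color === null || $color === '') { /* use default */ }  // new: no false positive on integer 0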
* adds Right::CREATEGROUP
* logic in Profile::hasRight() checks for silencing
* NewgroupAction checks for the permission before letting you see or process the form in the UI
* User_group::register() logic does a low-level check on the specified initial group admin, and rejects creation if that user doesn't have the right; guaranteeing that API methods etc will also have this restriction applied sensibly.
Made two new functions, Subscription::bySubscriber() and
Subscription::bySubscribed(), to get streams of Subscription objects.
Converted Profile::getSubscribers() and Profile::getSubscriptions() to
use these functions.
Big thanks to the folks at http://pecl.php.net/bugs/bug.php?id=16745 for the secret juju!
Classes were being torn down before session save handlers got called at the end of the request, which exploded with complaints about being unable to find various classes.
Registering a shutdown function lets us explicitly close out the session before everything gets torn down.
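A minimal sketch of the workaround, using stock PHP session functions:

register_shutdown_function('session_write_close'); // flush the session before class teardown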
We had two ways to generate an activity entry from a notice; one through
Notice::asAtomEntry() and one through Notice::asActivity() and
Activity::asString(). The code paths had already diverged somewhat. I
took the conditions that were in Notice::asAtomEntry() and made sure
they were replicated in the other two functions. Then, I rewrote
Notice::asAtomEntry() to use the other two functions instead.
This change passes the ActivityGenerationTests unit tests, but there
may be some other stuff that's not getting covered.
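So the Atom path is now just a thin wrapper, roughly (parameters illustrative; the two callees are the ones named above):

function asAtomEntry($namespace = false, $source = false)
{
    $act = $this->asActivity();
    return $act->asString($namespace, $source);
}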
common_shorten_links() can only access the web session's logged-in user, so it never properly applied user options when posting via XMPP, API, mail, etc.
Adds an optional $user parameter on common_shorten_links(), and a $user->shortenLinks() as a clearer interface for that.
Tweaked some lower-level functions so $user gets passed down -- generalizing the $notice_id param that was previously there for saving URLs at notice save time.
Note also ticket #2919: there's a lot of duplicate code calling the shortening, checking the length, and reporting near-identical error messages. These should be consolidated to aid in code and translation maintenance.
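The two entry points, in sketch form; the position of the new optional parameter is illustrative:

$short = common_shorten_links($text, $user);
$short = $user->shortenLinks($text);  // clearer interface wrapping the same call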
Code was doing a batch call to $avatar->delete(), which fails to properly engage the file deletion code. Calling the existing $profile->delete_avatars() function deletes them individually, which makes it all work nicely again.
This option may be useful for intranet sites that don't have direct access to the internet, as they may be unable to successfully fetch those resources.
Newly supported:
- TwitPic: added a local function using TwitPic's API, since the oohembed implementation for TwitPic produced invalid output which Services_oEmbed rejects. (bug filed upstream)
Tweaked...
- Flickr: works; now using a whitelist to use their endpoint directly instead of going through oohembed
- YouTube: worked around a bug in Services_oEmbed that broke the direct use of API discovery info, so we don't have to use oohembed.
Not currently working...
- YFrog: whitelisting their endpoint directly since the oohembed output is broken, but this doesn't appear to work currently; I think things are confused by YFrog's servers giving a '204 No Content' response on our HEAD checks on the original link.
The old code attempted to compare the value of the notice.created field against now() directly, which tends to explode in our current systems. now() comes up as the server/connection local timezone generally, while the created field is currently set as hardcoded UTC from the web servers. This would lead to breakage when we got a difference in seconds that's several hours off in either direction (depending on the local timezone). New code calculates a threshold by subtracting the number of seconds from the current UNIX timestamp and passing it in, in the correct format, for a simple comparison. As a bonus, this should also be more efficient, as it should be able to follow the index on profile_id and created.
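A hedged sketch of the new comparison; common_sql_date() is StatusNet's UTC date formatter, and the query fragment is illustrative:

$threshold = common_sql_date(time() - $seconds); // UTC, matching the created field
$notice->whereAdd("profile_id = $profile_id AND created > '$threshold'");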
* moved some translator comments that were not directly above the line with the message to the correct location.
* i18n for UI text.
* superfluous whitespace removed.
I've consolidated the checks for which user to use for single-user mode into User::singleUser(), which now uses the configured nickname by preference, falling back to the site owner if it's unset.
This is now called consistently from the places that needed to use the primary user's nickname in routing setup.
Setting $config['singleuser']['nickname'] should now work again as expected.
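For reference, the setting in question (the 'enabled' companion flag is an assumption):

$config['singleuser']['enabled'] = true;
$config['singleuser']['nickname'] = 'alice';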
Doesn't clear all possible cached entries, but this should get the ones that matter most: lookups by id, nickname, and alias. This should ensure that if a group name gets reused as a new group or alias, it should work properly.
There are some user-visible areas that aren't cleared, such as the 'top groups' lists on the GroupsAction sidebar; if a deleted group appears in those lists, it'll go away within an hour when the cached query expires.
When bogus SSL sites etc were hit through a shortening redirect, sometimes link resolution kinda blew up and the user would get a "Can't linkify" error, aborting their post.
Now catching this case and just passing through the URL without attempting to resolve it. Could benefit from an overall scrubbing of the freaky link/attachment code though...! :)
http://status.net/open-source/issues/2513
SubMirror: redid add-mirror frontend to accept a feed URL, then pass that on to OStatus, instead of pulling from your subscriptions.
Profile: tweaked subscriberCount() so it doesn't subtract 1 for foreign profiles who aren't subscribed to themselves; instead excludes the self-subscription in the count query.
Memcached_DataObject: tweak to avoid extra error spew in the DB error raising
Work in progress: tweaking feedsub garbage collection so we can count other uses
On my test setup, this fixes inbox delivery to 10,000 local recipients from a background queuedaemon running with a 32 MB memory limit; it completes the job within a minute of starting.
Pretty much everything in File and File_redirection initial processing needs to be rewritten to be non-awful; this code is very hard to follow and very easy to introduce huge bugs into. A fair amount of the complication is probably obsoleted by the redirection following now built into HTTPClient.
* Fake_XMPP back to Queued_XMPP, refactor how we use it and don't create objects and load classes until we need them.
* fix fatal error in IM settings while waiting for a Jabber confirmation.
* Caching fix for user_im_prefs
* fix for saving multiple transport settings
* some fixes for AIM & using normalized addresses for lookups
Users and administrators can set how long a URL can be before it's
shortened, and how long a notice can be before all its URLs are
shortened. They can also turn off shortening altogether.
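The knobs, roughly (key names and values are assumptions for illustration, not quotes from the config):

$config['url']['shortener'] = 'ur1.ca';   // which shortening service to use
$config['url']['maxurllength'] = 30;      // shorten any URL longer than this
$config['url']['maxnoticelength'] = 140;  // shorten all URLs once a notice exceeds this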
Squashed commit of the following:
commit d136b39011
Author: Evan Prodromou <evan@status.net>
Date: Mon Apr 26 02:39:00 2010 -0400
use site and user settings to determine when to shorten URLs
commit 1e1c851ff3
Author: Evan Prodromou <evan@status.net>
Date: Mon Apr 26 02:38:40 2010 -0400
add a method to force shortening URLs
commit 4d29ca0b91
Author: Evan Prodromou <evan@status.net>
Date: Mon Apr 26 02:37:41 2010 -0400
static method for getting best URL shortening service
commit a9c6a3bace
Author: Evan Prodromou <evan@status.net>
Date: Mon Apr 26 02:37:11 2010 -0400
allow 0 in numeric entries in othersettings
commit 767ff2f7ec
Author: Evan Prodromou <evan@status.net>
Date: Mon Apr 26 02:36:46 2010 -0400
allow 0 or blank string in inputs
commit 1e21af42a6
Author: Evan Prodromou <evan@status.net>
Date: Mon Apr 26 02:01:11 2010 -0400
add more URL-shortening options to othersettings
commit 869a6be0f5
Author: Evan Prodromou <evan@status.net>
Date: Sat Apr 24 14:22:51 2010 -0400
move url shortener superclass to lib from plugin
commit 9c0c9863d5
Author: Evan Prodromou <evan@status.net>
Date: Sat Apr 24 14:20:28 2010 -0400
documentation and whitespace on UrlShortenerPlugin
commit 7a1dd5798f
Author: Evan Prodromou <evan@status.net>
Date: Sat Apr 24 14:05:46 2010 -0400
add defaults for URL shortening
commit d259c37ad2
Author: Evan Prodromou <evan@status.net>
Date: Sat Apr 24 13:40:10 2010 -0400
Add User_urlshortener_prefs
Add a table for URL shortener prefs, a corresponding class, and the
correct mumbo-jumbo in statusnet.ini to make everything work.
* Moved notification sending from Notice::saveReplies to distrib queue handler, so it'll pull from the reply set we've saved regardless of how we got it.
* Set up gettext infrastructure for command-line scripts; gets localization mail notifications etc working from background queues.
* Adjusted locale switching: common_switch_locale() works at runtime for bg scripts, forces a message catalog update
Conflicts:
actions/imsettings.php
lib/jabber.php
Made a quick attempt to merge the new JID validation into the XmppPlugin; I haven't had a chance to test that version live yet.
Should also move over the test cases.
Should help with dupes that come in when inbox distrib jobs die and get restarted, etc.
Conflicts:
classes/Inbox.php
Looks like this was implemented on master recently and not copied up to testing. Merging to my version on testing, as I've added some doc comments and extracted a couple of functions for future ease of use.
Forced the index used on profile notice queries to (profile_id, id) instead of (profile_id, created, id).
It's been falling back to PRIMARY instead, which is really
very inefficient for a profile that hasn't posted in a few
months. Even though forcing the index will cause a filesort,
it's usually going to be better. Even for large profiles it
seems much faster than the badly-indexed query.
The magic __call() method is used to implement a getter and setter interface, and simply didn't bother to throw an error for things it didn't recognize.
This may expose a number of existing errors where mistyped method names are called and we're not noticing that they're failing.
This bug was hitting a number of places where we had the pattern:
$dbo->find();
while ($dbo->fetch()) {
    $x = clone($dbo);
    // do anything with $x other than storing it in an array
}
The cloned object's destructor would trigger on the second run through the loop, freeing the database result set -- not really what we wanted.
(Loops that stored the clones into an array were fine, since the clones stay in scope in the array longer than the original does.)
Detaching the database result from the clone lets us work with its data without interfering with the rest of the query.
In the unlikely event that somebody is making clones in the middle of a query, then trying to continue the query with the clone instead of the original object, well, they're gonna be broken now.
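The fix, roughly; treat the property name (DB_DataObject's internal result handle) as an assumption:

function __clone()
{
    // detach the shared result set, so destroying a clone
    // can't free the original query's results
    $this->_DB_resultid = false;
}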
* Subscription::start was sometimes passing users instead of profiles to hooks, which broke OStatus subscription notifications; now normalizing to profiles for processing.
* H-card parsing would trigger a lot of PHP warnings and notices in hKit. Now suppressing warnings and notices for the duration of the call to keep them out of output when display_errors is on.
* H-card parsing would trigger a PHP fatal error if the source page was not well-formed XML and Tidy was not present on the system. Switched normalization to use the PHP DOM module which is always present, as we have no need for Tidy's extra features here.
* Trying to fetch avatars from Google profiles failed and triggered a PHP warning due to the relative URL not being resolved during h-card parsing. Now passing profile page URL into hKit by sneaking a <base> tag in while we normalize the HTML source.
* Profile pages without a "Link" header could trigger PHP notices due to a bad NULL -> array(NULL) conversion in LinkHeader::getLink(). Now checking that there was a return value before converting single return value into array.
Base problem is that our caching-on-insert interferes with relying on column default values; the cached object is missing those fields, so they appear to be empty (null) when the object is retrieved from cache.
Now explicitly setting them when inserting subscriptions, and cleaned up some code that had alternate code paths.
May also have made auto-subscription work for remote OStatus subscribers, but can't test until magic sigs are working again.
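A hedged sketch of the explicit-defaults insert; the column names are from the subscription table as I recall it, so treat them as assumptions:

$sub = new Subscription();
$sub->subscriber = $subscriber->id;
$sub->subscribed = $other->id;
$sub->jabber = 1;                  // set defaults explicitly so the cached copy has them
$sub->created = common_sql_now();
$sub->insert();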
While deletion is in progress, the account is locked with the 'deleted' role, which disables all rights-controlled actions.
Todo:
* Pretty up the notice on the profile page about the pending delete. Show status?
* Possibly more thorough account disabling, such as disallowing all use for login and access.
* Improve error recovery; worst case is that an account gets left locked in 'deleted' state but the queue jobs have gotten dropped out. This would leave the username in use and any undeleted notices in place.
It's not currently used, and won't be efficient when we update the notice.profile_id_idx index to optimize for our id-based sorting when pulling user post lists for profile pages, feeds etc.
On my test system (without memcache), while testing the LDAP
authentication plugin, when I sign in for the first time, triggering
auto-registration, I get these messages in the output page:
Warning: ksort() expects parameter 1 to be array, null given in /home/jeff/Documents/code/statusnet/classes/Memcached_DataObject.php on line 219
Warning: Invalid argument supplied for foreach() in /home/jeff/Documents/code/statusnet/classes/Memcached_DataObject.php on line 224
Warning: assert() [function.assert]: Assertion failed in /home/jeff/Documents/code/statusnet/classes/Memcached_DataObject.php on line 241
(plus two "Cannot modify header information..." messages as a result of
the above warnings)
This change appears to fix this (although I can't really explain exactly
why).
Also stripping id from foreign HTML messages (could interfere with UI) and disabled failing attachment popup for a.attachment links that don't have a proper id, so you can click through instead of getting an error.
Issues:
* any other links aren't marked and saved
* inconsistent behavior between local and remote attachments (local displays in lightbox, remote doesn't)
* if the enclosure'd object isn't referenced in the content, you won't be offered a link to it in our UI
We only need one author for user feeds: the user themselves. So, show
the user as the activity:subject, and don't repeat the same
activity:actor for every notice unnecessarily.
In a federated system, "@nickname" is insufficient to uniquely
identify a user. However, it's a very convenient idiom. We need to
guess from context who 'nickname' refers to.
Previously, we were using the sender's profile (or what we knew about
them) as the only context. So, we assumed that they'd be mentioning to
someone they followed, or someone who followed them, or someone on
their own server.
Now, we include the notice information for context. We check to see if
the notice is a reply to another notice, and if the author of the
original notice has the nickname 'nickname', then the mention is
probably for them. Alternately, if the original notice mentions someone
with nickname 'nickname', then this notice is probably referring to
_them_.
Doing this kind of context sleuthing means we have to render the
content very late in the notice-saving process.
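In pseudo-PHP, the added context check is something like this; the Notice fields are real, the control flow is my paraphrase:

// while resolving '@nickname' in $notice's content:
if (!empty($notice->reply_to)) {
    $parent = Notice::staticGet('id', $notice->reply_to);
    if ($parent) {
        $author = $parent->getProfile();
        if ($author && $author->nickname == $nickname) {
            return $author;  // probably a reply to the original author
        }
        // else check profiles mentioned in $parent for a matching nickname
    }
}
// fall back to the sender-based guesses described above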
We add a local_group table to store data about local groups. It has
the unique key for nickname, so /group/<nickname> looks up here.
Updated DB data object classes and data files.
- added rel="ostatus:attention" links for group delivery
- added events for plugins to override group profile/permalink pages
- pulled Notice::saveGroups up to save-time so we can override;
it's relatively cheap and gives us a clean list of target
groups for distrib time even with customized delivery.
- fixed Notice::getGroups to return group objects as expected
- added some doc on new parameters to Notice::saveNew
- 'groups' list of group IDs to push to in place of parsing
- messages that come in via PuSH and contain local group targets
are delivered to local group members
- messages that come in via PuSH and contain remote group targets
are delivered to local members of the remote group
Todo:
- handle group posts that only come through Salmon
- handle conflicts in case something comes in both through Salmon and PuSH
- better source verification
- need a cleaner interface to look up groups by URI
- need a way to handle remote groups with conflicting names
Combined the code that finds mentions of other profiles into one place.
common_find_mentions() finds mentions and calls hooks to allow
supplemental syntax for mentions (like OStatus).
common_linkify_mentions() links mentions.
common_linkify_mention() links a mention.
Notice::saveReplies() now uses common_find_mentions() instead of
trying to parse everything again.
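The division of labor, sketched (argument lists illustrative):

$mentions = common_find_mentions($text, $notice);      // parse + plugin hooks
$rendered = common_linkify_mentions($text, $mentions); // wraps each one via common_linkify_mention()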
The subs_* functions in subs.php have made a lot of assumptions
about users versus profiles. I've refactored the functions to
be methods of the Subscription class instead, and to use Profile
objects throughout.
Some of the checks for blocks or existing subscriptions depended
on users or profiles, so I've moved those methods around a bit.
I've left stubs for the subs_* functions until we get time to replace
them.
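The new call shape, using Subscription::start() and Subscription::cancel() (both appear elsewhere in this log); the old subs_* names in the comments are from memory:

Subscription::start($profile, $other);  // was subs_subscribe_to()
Subscription::cancel($profile, $other); // was subs_unsubscribe_to()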
- Multiplexing queues into groups and for multiple sites.
- Sharing vs breakout configurable per site and per queue via $config['queue']['breakout']
- Detect how many times a message is redelivered, discard if it's killed too many daemons
- count configurable with $config['queue']['max_retries']
- can dump the items to files in $config['queue']['dead_letter_dir']
Queue daemon memory & resource leak fixes:
- avoid unnecessary reconnections to the memcached server (switch persistent connections back on at second initialization, assuming it's a child process)
- monkey-patch for leaky .ini loads in DB_DataObject::databaseStructure() - was leaking 200k per active switch
- applied leak fixes to Status_network as well, using intermediate base Safe_DataObject for both it and Memcache_DataObject
Misc queue fixes:
- correct handling of child processes exiting due to signal termination instead of regular exit
- shutdown instead of infinite respawn loop if we're already past the soft memory limit at startup
- Added --all option for xmppdaemon... still opens one xmpp connection per site that has xmpp active
Cache updates:
- add Cache::increment() method with native support for memcached atomic increment
statusnet.links.ini file could not be read anymore due to the entry for nonce containing a comma in its key value.
PHP's parse_ini_file() function no longer allows commas in keys, and rejects the *ENTIRE FILE* if it's present, breaking various automatic joins.
* detection of group feeds is currently a nasty hack based on presence of '/groups/' in URL -- should use some property on the feed?
* listing for the remote group is kinda cruddy; needs to be named more cleanly
* still need to establish per-author profiles (easier once we have the updated Atom code in)
* group delivery probably not right yet
* saving of group messages still triggering some weird behavior
Added support for since_id and max_id on group timeline feeds as a free extra. Enjoy!
* Treat linkless feed posts as status updates; drop the "New post:" prefix and quotes on them.
* Use stable user IDs for atom/rss2 feed links instead of unstable nicknames
* Pull Atom feed preferentially when subscribing -- can now put the remote user's profile page straight into the feed subscription form and get to the right place.
* Clean up naming for push endpoints
No change in efficiency for the common case where nothing's deleted: does the same bulk fetch of just the notices we think we'll need as before; then, if we turn up short, keeps checking one by one until we've filled up to our $limit.
This can leave us with overlap between pages, but we already have that when new messages come in between clicks; seems to be the lesser of evils versus not getting a 'before' button.
More permanent fix for that will be to switch timeline paging in the UI to use notice IDs.
* testing: (130 commits)
HTTP auth provided is evaluated even if it's not required
Rename rc3to09.sql to rc3torc4.sql to avoid confusion if we add a last-minute change after this!
Add new oauth tables and modifications to 'consumer' table for rc4
Centred leaderboard ad
camelcase the uap param names
move leaderboard to after the header
Moved rectangle ad into aside and leaderboard to the right in header.
Aligning wide skyscraper to the right instead of left
CSS ids and classes fixed in UAPPlugin
wrong height for rectangle in BlankAd
Add the moved BlankAdPlugin
make BlankAd dir and change to use a 1x1 image
move BlankAdPlugin to its own dir
Add BlankAdPlugin to test ad layout in different themes
make uapplugin an abstract class
move UAP plugin to core
Lowercased switch cases in UAP Plugin
Plugin for Universal Ad Package. Outputs four most widely used ad types.
Add persistent:true property to Stomp messages so ActiveMQ doesn't decide to discard them even though persistence is enabled on the broker. :) (Thanks Aric!)
quick fix: use common_path() on realtime update JS so it works with the new JS path code (will pull from main server for now)
...
Conflicts:
actions/apioauthaccesstoken.php
actions/apioauthauthorize.php
actions/apioauthrequesttoken.php
actions/editapplication.php
actions/newapplication.php
lib/apiauth.php
lib/queuemanager.php
lib/router.php
queuectl.php --update -s<site>
queuectl.php --stop
queuectl.php --restart
Default control channel is /topic/statusnet-control. For external utilities to send a site update ping direct to the queue server, connect via Stomp and send a message formatted thus:
update:<nickname>
(Nickname here, *not* server hostname! The rest of the queues will be updated to use nicknames later.)
Note that all currently-connected queue daemons will get these notifications, including both queuedaemon.php and xmppdaemon.php. (XMPP will ignore site update requests for sites that it's not handling.)
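From PHP, using the bundled Stomp library, the ping would look roughly like this (connection details illustrative):

$stomp = new Stomp('tcp://localhost:61613');
$stomp->connect();
$stomp->send('/topic/statusnet-control', 'update:mynickname');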
Limitations:
* only implemented for stomp queue manager so far
* --update may not yet handle a changed server name properly
* --restart won't reload PHP code files that were already loaded at startup. Still need to stop and restart the daemons from 'outside' when updating code base.
$sn->tags() returns tag list as array; $sn->hasTag('blah') to check for a particular tag only
Could be used to control things in config file:
$sn = Status_network::setupSite($_server, $_path, $_wildcard);
if (!$sn) { die("No such site"); }
if ($sn->hasTag('individual')) { /* blah */ }
Note memcached keys are unchanged; if tags are changed from an external tool clear:
statusnet:<dbname>:status_network:<key>:<val>
for <key>s 'nickname', 'hostname', and 'pathname'
Moved much of the writing that happens when posting a notice to a new
queuehandler, distribqueuehandler. This updates tags, groups, replies
and inboxes at queue time (or at Web time, if queues are disabled).
To make this work well, I had to break up the monolithic
Notice::blowCaches() and make cache blowing happen closer to where
data is updated.
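A hedged sketch of the new handler's shape, following the handle()-method convention described later in this log:

class DistribQueueHandler extends QueueHandler
{
    function transport()
    {
        return 'distrib';
    }

    function handle($notice)
    {
        // update tags, groups, replies, and inboxes here,
        // instead of inline at post time
        return true;
    }
}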
Squashed commit of the following:
commit 5257626c62750ac4ac1db0ce2b71410c5711cfa3
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 14:56:41 2010 -0500
slightly better handling of blowing tag memory cache
commit 8a22a3cdf6ec28685da129a0313e7b2a0837c9ef
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 01:42:56 2010 -0500
change 'distribute' to 'distrib' so not too long for dbqueue
commit 7a063315b0f7fad27cb6fbd2bdd74e253af83e4f
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 01:39:15 2010 -0500
change handle_notice() to handle() in distributqueuehandler
commit 1a39ccd28b9994137d7bfd21bb4f230546938e77
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 16:05:25 2010 -0500
error with queuemanager
commit e6b3bb93f305cfd2de71a6340b8aa6fb890049b7
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 01:11:34 2010 -0500
Blow memcache at different point rather than one big function for Notice class
commit 94d557cdc016187d1d0647ae1794cd94d6fb8ac8
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 00:48:44 2010 -0500
Blow memcache at different point rather than one big function for Notice class
commit 1c781dd08c88a35dafc5c01230b4872fd6b95182
Author: Evan Prodromou <evan@status.net>
Date: Wed Jan 20 08:54:18 2010 -0500
move broadcasting and distributing to new queuehandler
commit da3e46d26b84e4f028f34a13fd2ee373e4c1b954
Author: Evan Prodromou <evan@status.net>
Date: Wed Jan 20 08:53:12 2010 -0500
Move distribution of notices to new distribute queue handler
I made a bad merge on Jan 10th from master to 0.9.x. This lost a number
of memcache enhancements made on the 0.9.x branch. I've been able to
re-do the manual merge, and this represents the changes. Most of them
are related to caching on insert.
Queue handlers for XMPP individual & firehose output now send their XML stanzas
to another output queue instead of connecting directly to the chat server. This
lets us have as many general processing threads as we need, while all actual
XMPP input and output go through a single daemon with a single connection open.
This avoids problems with multiple connected resources:
* multiple windows shown in some chat clients (psi, gajim, kopete)
* extra load on server
* incoming message delivery forwarding issues
Database changes:
* queue_item drops 'notice_id' in favor of a 'frame' blob.
This is based on Craig Andrews' work branch to generalize queues to take any
object, but conservatively leaving out the serialization for now.
Table updater (preserves any existing queued items) in db/rc3to09.sql
Code changes to watch out for:
* Queue handlers should now define a handle() method instead of handle_notice()
* QueueDaemon and XmppDaemon now share common i/o (IoMaster) and respawning
thread management (RespawningDaemon) infrastructure.
* The polling XmppConfirmManager has been dropped, as the message is queued
directly when saving IM settings.
* Enable $config['queue']['debug_memory'] to output current memory usage at
each run through the event loop to watch for memory leaks
To do:
* Adapt XMPP i/o to component connection mode for multi-site support.
* XMPP input can also be broken out to a queue, which would allow the actual
notice save etc to be handled by general queue threads.
* Make sure there are no problems with simply pushing serialized Notice objects
to queues.
* Find a way to improve interactive performance of the database-backed queue
handler; polling is pretty painful to XMPP.
* Possibly redo the way QueueHandlers are injected into a QueueManager. The
grouping used to split out the XMPP output queue is a bit awkward.
Conflicts:
scripts/xmppdaemon.php
- NOTICE_INBOX_SOURCE_* constants moved to common.php since Notice_inbox.php not always loaded
- fixed typo in User::staticGet() call which caused user #1 to receive messages once for each subscriber instead of for him/herself
- 'continue' -> 'continue 2' inside switch() statement to fix loop escape (PHP considers switch() a looping construct for break & continue)
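For reference, the PHP gotcha in question:

foreach ($items as $item) {
    switch ($item->type) {
    case 'skip':
        continue 2; // plain 'continue' only ends the switch, since PHP
                    // counts switch() as a loop level for break/continue
    }
    // ...
}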
Key changes:
* Initialization code moved from common.php to StatusNet class;
can now switch configurations during runtime.
* As a consequence, configuration files must now be idempotent...
Be careful with constant, function or class definitions.
* Control structure for daemons/QueueManager/QueueHandler has been refactored;
the run loop is now managed by IoMaster, run via scripts/queuedaemon.php
IoManager subclasses are woken to handle socket input or polling, and may
cover multiple sites.
* Plugins can implement notice queue handlers more easily by registering a
QueueHandler class; no more need to add a daemon.
The new QueueDaemon runs from scripts/queuedaemon.php:
* This replaces most of the old *handler.php scripts; they've been refactored
to the bare handler classes.
* Spawns multiple child processes to spread load; defaults to CPU count on
Linux and Mac OS X systems, or override with --threads=N
* When multithreaded, child processes are automatically respawned on failure.
* Threads gracefully shut down and restart when passing a soft memory limit
(defaults to 90% of memory_limit), limiting damage from memory leaks.
* Support for UDP-based monitoring: http://www.gitorious.org/snqmon
Rough control flow diagram:
QueueDaemon -> IoMaster -> IoManager
                           QueueManager [listen or poll] -> QueueHandler
                           XmppManager [ping & keepalive]
                           XmppConfirmManager [poll updates]
Todo:
* Respawning features not currently available running single-threaded.
* When running single-site, configuration changes aren't picked up.
* New sites or config changes affecting queue subscriptions are not yet
handled without a daemon restart.
* SNMP monitoring output to integrate with general tools (nagios, ganglia)
* Convert XMPP confirmation message sends to use stomp queue instead of polling
* Convert xmppdaemon.php to IoManager?
* Convert Twitter status, friends import polling daemons to IoManager
* Clean up some error reporting and failure modes
* May need to adjust queue priorities for best perf in backlog/flood cases
Detailed code history available in my daemon-work branch:
http://www.gitorious.org/~brion/statusnet/brion-fixes/commits/daemon-work
(We need to keep it returning a reference because the extlib parent class is stuck in PHP 4-land and uses references everywhere, including this function's return value. Yuck!)
Also changed pkeyGet to drop the reference, since it doesn't have an upstream equivalent.
* added ProfileDeleteRelated event to match UserDeleteRelated, to allow plugins to add extra related tables on profile deletion
* UserFlagPlugin: deleting flags when target profile is deleted
* UserFlagPlugin: deleting flags when flagging user is deleted
* UserFlagPlugin: fix for autoloader -- class names are case-insensitive. We may get lowercase class names coming in at times, such as when creating DB objects programmatically from a table name.
Note that any already-existing bogus entries need to be removed from the database; they can be located with:
select * from user_flag_profile where (select id from profile where id=profile_id) is null;
select * from user_flag_profile where (select id from user where id=user_id) is null;
In the web interface and the retweet/repeat API we show the original untrimmed text, but some back-compat API messages will still show the trimmed 'RT' version.
This matches Twitter's behavior on overlong retweets, though we're outputting the RT version in more API results than they do.
* We now cache negative lookups; clear them in Memcached_DataObject->insert()
* Mark file.url as a unique key in statusnet.ini so its negative lookups are cleared properly (first save of a notice with a new URL was failing due to double-insert)
* Now using serialization for default in-process cache instead of just saving objects; avoids potential corruption if you save an object to cache, change the original object, then fetch the same key from cache again
There's great value in knowing that something doesn't exist. We
now cache this information, and carefully compare the results from
cache as $results !== false instead of !empty($results), since some
empty values (null, 0, empty array, empty string) are stored in the
cache.
The caching staticGet() and pkeyGet() now store DB misses in the cache,
and cachedQuery() checks for empty results from the cache.
There were some problems with the automated cache/uncache system
for data objects that made us cache unfindable keys (with null
attributes and sometimes null names). Fixed those problems and
refactored the encache() and decache() methods so they use a helper
to find the cache keys to use.
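A minimal sketch of the comparison rule (the cacheGet() name is illustrative):

$result = self::cacheGet($key);
if ($result !== false) { // false means "not in cache"
    return $result;      // may legitimately be null: a cached negative lookup
}
// fall through to the database, then encache the result -- even a miss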