The magic __call() method is used to implement a getter/setter interface, and it simply didn't bother to throw an error for method names it didn't recognize.
This may expose a number of existing errors where mistyped method names are called and we're not noticing that they're failing.
This bug was hitting a number of places where we had the pattern:
$db->find();
while($dbo->fetch()) {
$x = clone($dbo);
// do anything with $x other than storing it in an array
}
The cloned object's destructor would trigger on the second run through the loop, freeing the database result set -- not really what we wanted.
(Loops that stored the clones into an array were fine, since the clones stay in scope in the array longer than the original does.)
Detaching the database result from the clone lets us work with its data without interfering with the rest of the query.
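A rough sketch of the detach-on-clone idea (the base class shown and the internal result-handle property name are assumptions, not the actual fix):
class Cloneable_DataObject extends DB_DataObject
{
    // When cloned, forget the shared result-set handle so this clone's
    // destructor can't free the result the original is still iterating.
    function __clone()
    {
        $this->_DB_resultid = false;   // assumed internal field name
    }
}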
In the unlikely event that somebody is making clones in the middle of a query and then trying to continue the query with the clone instead of the original object, well, they're going to be broken now.
* Subscription::start was sometimes passing users instead of profiles to hooks, which broke OStatus subscription notifications; now normalizing to profiles for processing.
* H-card parsing would trigger a lot of PHP warnings and notices in hKit. Now suppressing warnings and notices for the duration of the call to keep them out of output when display_errors is on.
* H-card parsing would trigger a PHP fatal error if the source page was not well-formed XML and Tidy was not present on the system. Switched normalization to use the PHP DOM module which is always present, as we have no need for Tidy's extra features here.
* Trying to fetch avatars from Google profiles failed and triggered a PHP warning due to the relative URL not being resolved during h-card parsing. Now passing profile page URL into hKit by sneaking a <base> tag in while we normalize the HTML source.
* Profile pages without a "Link" header could trigger PHP notices due to a bad NULL -> array(NULL) conversion in LinkHeader::getLink(). Now checking that there was a return value before converting single return value into array.
The base problem is that our caching-on-insert interferes with relying on column default values; the cached object is missing those fields, so they appear to be empty (null) when the object is retrieved from the cache.
Now explicitly setting them when inserting subscriptions, and cleaned up some code that had alternate code paths.
May also have made auto-subscription work for remote OStatus subscribers, but can't test until magic sigs are working again.
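As a sketch of the workaround (the column names here are illustrative guesses at the affected fields):
$sub = new Subscription();
$sub->subscriber = $subscriber->id;
$sub->subscribed = $other->id;
// Set values that would otherwise come from column defaults, so the copy
// written to the cache on insert matches what the database stores.
$sub->jabber  = 1;
$sub->sms     = 1;
$sub->created = common_sql_now();
$sub->insert();   // insert() also primes the cache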
While deletion is in progress, the account is locked with the 'deleted' role, which disables all actions that go through rights control.
Todo:
* Pretty up the notice on the profile page about the pending delete. Show status?
* Possibly more thorough account disabling, such as disallowing all use for login and access.
* Improve error recovery; worst case is that an account gets left locked in 'deleted' state but the queue jobs have gotten dropped out. This would leave the username in use and any undeleted notices in place.
It's not currently used, and won't be efficient when we update the notice.profile_id_idx index to optimize for our id-based sorting when pulling user post lists for profile pages, feeds etc.
On my test system (without memcache), while testing the LDAP
authentication plugin, when I sign in for the first time, triggering
auto-registration, I get these messages in the output page:
Warning: ksort() expects parameter 1 to be array, null given in /home/jeff/Documents/code/statusnet/classes/Memcached_DataObject.php on line 219
Warning: Invalid argument supplied for foreach() in /home/jeff/Documents/code/statusnet/classes/Memcached_DataObject.php on line 224
Warning: assert() [function.assert]: Assertion failed in /home/jeff/Documents/code/statusnet/classes/Memcached_DataObject.php on line 241
(plus two "Cannot modify header information..." messages as a result of
the above warnings)
This change appears to fix this (although I can't really explain exactly
why).
Also stripping the id attribute from foreign HTML messages (it could interfere with the UI) and disabling the failing attachment popup for a.attachment links that don't have a proper id, so you can click through instead of getting an error.
Issues:
* any other links aren't marked and saved
* inconsistent behavior between local and remote attachments (local displays in lightbox, remote doesn't)
* if the enclosure'd object isn't referenced in the content, you won't be offered a link to it in our UI
We only need one author for user feeds: the user themselves. So, show
the user as the activity:subject, and don't repeat the same
activity:actor for every notice unnecessarily.
In a federated system, "@nickname" is insufficient to uniquely
identify a user. However, it's a very convenient idiom. We need to
guess from context who 'nickname' refers to.
Previously, we were using the sender's profile (or what we knew about
them) as the only context. So, we assumed that they'd be mentioning to
someone they followed, or someone who followed them, or someone on
their own server.
Now, we include the notice information for context. We check to see if
the notice is a reply to another notice, and if the author of the
original notice has the nickname 'nickname', then the mention is
probably for them. Alternately, if the original notice mentions someone
with nickname 'nickname', then this notice is probably referring to
_them_.
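Roughly, the extra check looks like this (helper names and signatures are assumptions for illustration):
function guess_mentioned_profile($nickname, $sender, $parent_notice)
{
    if ($parent_notice) {
        // Is the author of the original notice called $nickname?
        $author = Profile::staticGet('id', $parent_notice->profile_id);
        if ($author && $author->nickname == $nickname) {
            return $author;
        }
        // Was someone called $nickname mentioned in the original notice?
        foreach ($parent_notice->getReplies() as $profile_id) {   // assumed helper
            $mentioned = Profile::staticGet('id', $profile_id);
            if ($mentioned && $mentioned->nickname == $nickname) {
                return $mentioned;
            }
        }
    }
    // Fall back to the old sender-only context (subscriptions, same server, ...)
    return common_relative_profile($sender, $nickname);
}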
Doing this kind of context sleuthing means we have to render the
content very late in the notice-saving process.
We add a local_group table to store data about local groups. It has
the unique key for nickname, so /group/<nickname> looks up here.
Updated DB data object classes and data files.
- added rel="ostatus:attention" links for group delivery
- added events for plugins to override group profile/permalink pages
- pulled Notice::saveGroups up to save-time so we can override;
it's relatively cheap and gives us a clean list of target
groups for distrib time even with customized delivery.
- fixed Notice::getGroups to return group objects as expected
- added some doc on new parameters to Notice::saveNew
- 'groups' list of group IDs to push to in place of parsing
- messages that come in via PuSH and contain local group targets
are delivered to local group members
- messages that come in via PuSH and contain remote group targets
are delivered to local members of the remote group
Todo:
- handle group posts that only come through Salmon
- handle conflicts in case something comes in both through Salmon and PuSH
- better source verification
- need a cleaner interface to look up groups by URI
- need a way to handle remote groups with conflicting names
Combined the code that finds mentions of other profiles into one place.
common_find_mentions() finds mentions and calls hooks to allow
supplemental syntax for mentions (like OStatus).
common_linkify_mentions() links mentions.
common_linkify_mention() links a mention.
Notice::saveReplies() now uses common_find_mentions() instead of
trying to parse everything again.
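A sketch of how the new helpers fit together (the parameter lists shown are assumptions based on this description):
$mentions = common_find_mentions($text, $notice);     // plugins can add matches via the hooks
$html     = common_linkify_mentions($text, $notice);  // wraps each found mention in a link
// Notice::saveReplies() works from the same mention list instead of re-parsing $text.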
The subs_* functions in subs.php have made a lot of assumptions
about users versus profiles. I've refactored the functions to
be methods of the Subscription class instead, and to use Profile
objects throughout.
Some of the checks for blocks or existing subscriptions depended
on users or profiles, so I've moved those methods around a bit.
I've left stubs for the subs_* functions until we get time to replace
them.
- Multiplexing queues into groups and for multiple sites.
- Sharing vs breakout configurable per site and per queue via $config['queue']['breakout']
- Detect how many times a message is redelivered, discard if it's killed too many daemons
- count configurable with $config['queue']['max_retries']
- can dump the items to files in $config['queue']['dead_letter_dir']
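For illustration, the retry/dead-letter options above might be set like this (the values and path are placeholders; the breakout format isn't shown since it's not spelled out here):
$config['queue']['max_retries'] = 5;   // discard a message after this many redeliveries
$config['queue']['dead_letter_dir'] = '/var/spool/statusnet/dead-letters';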
Queue daemon memory & resource leak fixes:
- avoid unnecessary reconnections to the memcached server (switch persistent connections back on at second initialization, assuming it's a child process)
- monkey-patch for leaky .ini loads in DB_DataObject::databaseStructure() - was leaking 200k per active switch
- applied leak fixes to Status_network as well, using intermediate base Safe_DataObject for both it and Memcache_DataObject
Misc queue fixes:
- correct handling of child processes exiting due to signal termination instead of regular exit
- shut down instead of entering an infinite respawn loop if we're already past the soft memory limit at startup
- Added --all option for xmppdaemon... still opens one xmpp connection per site that has xmpp active
Cache updates:
- add Cache::increment() method with native support for memcached atomic increment
statusnet.links.ini could no longer be read because the entry for nonce contained a comma in its key.
PHP's parse_ini_file() function no longer allows commas in keys, and rejects the *ENTIRE FILE* if one is present, breaking various automatic joins.
* detection of group feeds is currently a nasty hack based on presence of '/groups/' in URL -- should use some property on the feed?
* listing for the remote group is kinda cruddy; needs to be named more cleanly
* still need to establish per-author profiles (easier once we have the updated Atom code in)
* group delivery probably not right yet
* saving of group messages still triggering some weird behavior
Added support for since_id and max_id on group timeline feeds as a free extra. Enjoy!
* Treat linkless feed posts as status updates; drop the "New post:" prefix and quotes on them.
* Use stable user IDs for atom/rss2 feed links instead of unstable nicknames
* Pull Atom feed preferentially when subscribing -- can now put the remote user's profile page straight into the feed subscription form and get to the right place.
* Clean up naming for push endpoints
No change in efficiency for the common case where nothing's deleted: we do the same bulk fetch of just the notices we think we'll need as before, then, if we come up short, keep checking one by one until we've filled up to our $limit.
This can leave us with overlap between pages, but we already have that when new messages come in between clicks; seems to be the lesser of evils versus not getting a 'before' button.
More permanent fix for that will be to switch timeline paging in the UI to use notice IDs.
* testing: (130 commits)
HTTP auth provided is evaluated even if it's not required
Rename rc3to09.sql to rc3torc4.sql to avoid confusion if we add a last-minute change after this!
Add new oauth tables and modifications to 'consumer' table for rc4
Centred leaderboard ad
camelcase the uap param names
move leaderboard to after the header
Moved rectangle ad into aside and leaderboard to the right in header.
Aligning wide skyscraper to the right instead of left
CSS ids and classes fixed in UAPPlugin
wrong height for rectangle in BlankAd
Add the moved BlankAdPlugin
make BlankAd dir and change to use a 1x1 image
move BlankAdPlugin to its own dir
Add BlankAdPlugin to test ad layout in different themes
make uapplugin an abstract class
move UAP plugin to core
Lowercased switch cases in UAP Plugin
Plugin for Universal Ad Package. Outputs four most widely used ad types.
Add persistent:true property to Stomp messages so ActiveMQ doesn't decide to discard them even though persistence is enabled on the broker. :) (Thanks Aric!)
quick fix: use common_path() on realtime update JS so it works with the new JS path code (will pull from main server for now)
...
Conflicts:
actions/apioauthaccesstoken.php
actions/apioauthauthorize.php
actions/apioauthrequesttoken.php
actions/editapplication.php
actions/newapplication.php
lib/apiauth.php
lib/queuemanager.php
lib/router.php
queuectl.php --update -s<site>
queuectl.php --stop
queuectl.php --restart
Default control channel is /topic/statusnet-control. For external utilities to send a site update ping directly to the queue server, connect via Stomp and send a message formatted thus:
update:<nickname>
(Nickname here, *not* server hostname! The rest of the queues will be updated to use nicknames later.)
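For example, with the PECL Stomp extension a one-off ping might look like this (broker address and credentials are placeholders):
$stomp = new Stomp('tcp://localhost:61613', 'user', 'pass');
$stomp->send('/topic/statusnet-control', 'update:examplenick');   // nickname, not hostname
unset($stomp);   // disconnect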
Note that all currently-connected queue daemons will get these notifications, including both queuedaemon.php and xmppdaemon.php. (XMPP will ignore site update requests for sites that it's not handling.)
Limitations:
* only implemented for stomp queue manager so far
* --update may not yet handle a changed server name properly
* --restart won't reload PHP code files that were already loaded at startup. Still need to stop and restart the daemons from 'outside' when updating code base.
$sn->tags() returns tag list as array; $sn->hasTag('blah') to check for a particular tag only
Could be used to control things in config file:
$sn = Status_network::setupSite($_server, $_path, $_wildcard);
if (!$sn) { die("No such site"); }
if ($sn->hasTag('individual')) { /* blah */ }
Note memcached keys are unchanged; if tags are changed from an external tool, clear:
statusnet:<dbname>:status_network:<key>:<val>
for <key>s 'nickname', 'hostname', and 'pathname'
Moved much of the writing that happens when posting a notice to a new
queuehandler, distribqueuehandler. This updates tags, groups, replies
and inboxes at queue time (or at Web time, if queues are disabled).
To make this work well, I had to break up the monolithic
Notice::blowCaches() and make cache blowing happen closer to where
data is updated.
Squashed commit of the following:
commit 5257626c62750ac4ac1db0ce2b71410c5711cfa3
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 14:56:41 2010 -0500
slightly better handling of blowing tag memory cache
commit 8a22a3cdf6ec28685da129a0313e7b2a0837c9ef
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 01:42:56 2010 -0500
change 'distribute' to 'distrib' so not too long for dbqueue
commit 7a063315b0f7fad27cb6fbd2bdd74e253af83e4f
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 01:39:15 2010 -0500
change handle_notice() to handle() in distributqueuehandler
commit 1a39ccd28b9994137d7bfd21bb4f230546938e77
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 16:05:25 2010 -0500
error with queuemanager
commit e6b3bb93f305cfd2de71a6340b8aa6fb890049b7
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 01:11:34 2010 -0500
Blow memcache at different point rather than one big function for Notice class
commit 94d557cdc016187d1d0647ae1794cd94d6fb8ac8
Author: Evan Prodromou <evan@status.net>
Date: Mon Jan 25 00:48:44 2010 -0500
Blow memcache at different point rather than one big function for Notice class
commit 1c781dd08c88a35dafc5c01230b4872fd6b95182
Author: Evan Prodromou <evan@status.net>
Date: Wed Jan 20 08:54:18 2010 -0500
move broadcasting and distributing to new queuehandler
commit da3e46d26b84e4f028f34a13fd2ee373e4c1b954
Author: Evan Prodromou <evan@status.net>
Date: Wed Jan 20 08:53:12 2010 -0500
Move distribution of notices to new distribute queue handler
I made a bad merge on Jan 10th from master to 0.9.x. This lost a number
of memcache enhancements made on the 0.9.x branch. I've been able to
re-do the manual merge, and this represents the changes. Most of them
are related to caching on insert.
Queue handlers for XMPP individual & firehose output now send their XML stanzas
to another output queue instead of connecting directly to the chat server. This
lets us have as many general processing threads as we need, while all actual
XMPP input and output go through a single daemon with a single connection open.
This avoids problems with multiple connected resources:
* multiple windows shown in some chat clients (psi, gajim, kopete)
* extra load on server
* incoming message delivery forwarding issues
Database changes:
* queue_item drops 'notice_id' in favor of a 'frame' blob.
This is based on Craig Andrews' work branch to generalize queues to take any
object, but conservatively leaving out the serialization for now.
Table updater (preserves any existing queued items) in db/rc3to09.sql
Code changes to watch out for:
* Queue handlers should now define a handle() method instead of handle_notice() (see the sketch after this list)
* QueueDaemon and XmppDaemon now share common i/o (IoMaster) and respawning
thread management (RespawningDaemon) infrastructure.
* The polling XmppConfirmManager has been dropped, as the message is queued
directly when saving IM settings.
* Enable $config['queue']['debug_memory'] to output current memory usage at
each run through the event loop to watch for memory leaks
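Sketch of a new-style handler (the transport() method and return-value semantics are assumptions beyond what's stated above):
class ExampleQueueHandler extends QueueHandler
{
    function transport()
    {
        return 'example';   // queue name this handler consumes (assumed contract)
    }

    // Previously handle_notice($notice); now a generic handle().
    function handle($notice)
    {
        // ... deliver $notice over the example transport ...
        return true;        // assumed: true = done, false = leave for retry
    }
}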
To do:
* Adapt XMPP i/o to component connection mode for multi-site support.
* XMPP input can also be broken out to a queue, which would allow the actual
notice save etc to be handled by general queue threads.
* Make sure there are no problems with simply pushing serialized Notice objects
to queues.
* Find a way to improve interactive performance of the database-backed queue
handler; polling is pretty painful to XMPP.
* Possibly redo the way QueueHandlers are injected into a QueueManager. The
grouping used to split out the XMPP output queue is a bit awkward.
Conflicts:
scripts/xmppdaemon.php
- NOTICE_INBOX_SOURCE_* constants moved to common.php since Notice_inbox.php not always loaded
- fixed typo in User::staticGet() call which caused user #1 to receive messages once for each subscriber instead of for him/herself
- 'continue' -> 'continue 2' inside switch() statement to fix loop escape (PHP considers switch() a looping construct for break & continue)
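Illustration of why 'continue 2' is needed (the helper functions are hypothetical):
while ($item = fetch_item($queue)) {
    switch ($item->type) {
    case 'skip':
        continue 2;            // next iteration of the while loop
    case 'normal':
        process_item($item);
        break;
    }
    // A bare 'continue' above would only exit the switch (like 'break'),
    // so this line would still run for 'skip' items.
    log_item($item);
}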
Key changes:
* Initialization code moved from common.php to StatusNet class;
can now switch configurations during runtime.
* As a consequence, configuration files must now be idempotent...
Be careful with constant, function or class definitions.
* Control structure for daemons/QueueManager/QueueHandler has been refactored;
the run loop is now managed by IoMaster run via scripts/queuedaemon.php
IoManager subclasses are woken to handle socket input or polling, and may
cover multiple sites.
* Plugins can implement notice queue handlers more easily by registering a
QueueHandler class; no more need to add a daemon.
The new QueueDaemon runs from scripts/queuedaemon.php:
* This replaces most of the old *handler.php scripts; they've been refactored
to the bare handler classes.
* Spawns multiple child processes to spread load; defaults to CPU count on
Linux and Mac OS X systems, or override with --threads=N
* When multithreaded, child processes are automatically respawned on failure.
* Threads gracefully shut down and restart when passing a soft memory limit
(defaults to 90% of memory_limit), limiting damage from memory leaks.
* Support for UDP-based monitoring: http://www.gitorious.org/snqmon
Rough control flow diagram:
QueueDaemon -> IoMaster -> IoManager
QueueManager [listen or poll] -> QueueHandler
XmppManager [ping & keepalive]
XmppConfirmManager [poll updates]
Todo:
* Respawning features not currently available running single-threaded.
* When running single-site, configuration changes aren't picked up.
* New sites or config changes affecting queue subscriptions are not yet
handled without a daemon restart.
* SNMP monitoring output to integrate with general tools (nagios, ganglia)
* Convert XMPP confirmation message sends to use stomp queue instead of polling
* Convert xmppdaemon.php to IoManager?
* Convert Twitter status, friends import polling daemons to IoManager
* Clean up some error reporting and failure modes
* May need to adjust queue priorities for best perf in backlog/flood cases
Detailed code history available in my daemon-work branch:
http://www.gitorious.org/~brion/statusnet/brion-fixes/commits/daemon-work
(We need to keep it returning a reference because the extlib parent class is stuck in PHP 4-land and uses references everywhere, including this function's return value. Yuck!)
Also changed pkeyGet to drop the reference, since it doesn't have an upstream equivalent.
* added ProfileDeleteRelated event to match UserDeleteRelated, to allow plugins to add extra related tables on profile deletion
* UserFlagPlugin: deleting flags when target profile is deleted
* UserFlagPlugin: deleting flags when flagging user is deleted
* UserFlagPlugin: fix for autoloader -- class names are case-insensitive. We may get lowercase class names coming in at times, such as when creating DB objects programmatically from a table name.
Note that any already-existing bogus entries need to be removed from the database; they can be found with:
select * from user_flag_profile where (select id from profile where id=profile_id) is null;
select * from user_flag_profile where (select id from user where id=user_id) is null;
In the web interface and the retweet/repeat API we show the original untrimmed text, but some back-compat API messages will still show the trimmed 'RT' version.
This matches Twitter's behavior on overlong retweets, though we're outputting the RT version in more API results than they do.
* We now cache negative lookups; clear them in Memcached_DataObject->insert()
* Mark file.url as a unique key in statusnet.ini so its negative lookups are cleared properly (first save of a notice with a new URL was failing due to double-insert)
* Now using serialization for default in-process cache instead of just saving objects; avoids potential corruption if you save an object to cache, change the original object, then fetch the same key from cache again
There's great value in knowing that something doesn't exist. We
now cache this information, and carefully compare the results from
cache as $results !== false instead of !empty($results), since some
empty values (null, 0, empty array, empty string) are stored in the
cache.
Caching staticGet() and pkeyGet() now store DB misses in the cache,
and cachedQuery() checks for empty results from the cache.
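A minimal sketch of the lookup pattern (the key format and DB helper are illustrative):
function cached_profile_lookup($c, $id)
{
    $key = 'profile:id:' . $id;              // illustrative key format; $c is a memcached handle
    $result = $c->get($key);
    if ($result !== false) {
        // Cache hit: the value may legitimately be null, 0, '' or an
        // empty array -- including "we know this row doesn't exist".
        return $result;
    }
    $profile = lookup_profile_in_db($id);       // hypothetical DB lookup
    $c->set($key, $profile ? $profile : null);  // cache the miss as null, too
    return $profile;
}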
There were some problems with the automated cache/uncache system
for data objects that made us cache unfindable keys (with null
attributes and sometimes null names). Fixed those problems and
refactored the encache() and decache() methods so they use a helper
to find the cache keys to use.
Moved the important parts of the location-argument-handling stuff
to a single function. Handles defaults and overrides correctly, and
easy to use. Changed Web and API channels to use it.
Sorting on notice.id when our primary selector was notice_inbox.user_id caused a filesort and table scan of the notice table.
Switching to notice_inbox's notice_id means we can use our index, and everything comes right up.
Before:
mysql> explain SELECT notice.id AS id FROM notice JOIN notice_inbox ON notice.id = notice_inbox.notice_id WHERE notice_inbox.user_id = 18574 AND notice.repeat_of IS NULL ORDER BY notice.id DESC LIMIT 61 OFFSET 0;
+----+-------------+--------------+--------+------------------------------------+---------+---------+-------------------------------+--------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------+--------+------------------------------------+---------+---------+-------------------------------+--------+----------------------------------------------+
| 1 | SIMPLE | notice_inbox | ref | PRIMARY,notice_inbox_notice_id_idx | PRIMARY | 4 | const | 102600 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | notice | eq_ref | PRIMARY | PRIMARY | 4 | stoica.notice_inbox.notice_id | 1 | Using index |
+----+-------------+--------------+--------+------------------------------------+---------+---------+-------------------------------+--------+----------------------------------------------+
After:
mysql> explain SELECT notice.id AS id FROM notice JOIN notice_inbox ON notice.id = notice_inbox.notice_id WHERE notice_inbox.user_id = 18574 AND notice.repeat_of IS NULL ORDER BY notice_id DESC LIMIT 61 OFFSET 0;
+----+-------------+--------------+--------+------------------------------------+---------+---------+-------------------------------+--------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------+--------+------------------------------------+---------+---------+-------------------------------+--------+--------------------------+
| 1 | SIMPLE | notice_inbox | ref | PRIMARY,notice_inbox_notice_id_idx | PRIMARY | 4 | const | 102816 | Using where; Using index |
| 1 | SIMPLE | notice | eq_ref | PRIMARY,notice_repeatof_idx | PRIMARY | 4 | stoica.notice_inbox.notice_id | 1 | Using where |
+----+-------------+--------------+--------+------------------------------------+---------+---------+-------------------------------+--------+--------------------------+
Prevent users from deleting their subscription to themselves (their self-subscription) via the API. Additionally, make it impossible
to block yourself or unsubscribe from yourself, period.
I also made User use the subs.php helper function for unsubscribing
during a block.
Hopefully, these changes will get rid of the problem of people
accidentally deleting their self-subscriptions once and for all
(knock on wood).
* master: (67 commits)
Ticket 2038: fix bad bug tracker link
Fix regression in group posting: bug introduced in commit 1319002e15. Need to use actual profile object rather than an id on a variable that doesn't exist when checking blocks :D
Log database errors when saving notice_inbox entries
Drop the username from the log id for now; seems to trigger an error loop in some circumstances
request id on logs... pid + random id per web request + username + method + url
Add OpenID ini info back into statusnet.ini as a stopgap until we can
Some changes to the OpenID DataObjects to make them emit the exact same
OpenID plugin should set 'user_openid.display' as unique key
Remove relationship: user_openid.user_id -> user.id. I don't think this
Have OpenID plugin DataObjects emit their own .ini info
Revert "Allow plugin DB_DataObject classes to not have to use the .ini file by overriding keys(), table(), and sequenceKey() for them"
Catch and report exceptions from notice_to_omb_notice() instead of letting the OMB queue handler die.
Fix regression in remote subscription; added hasRole() shadow method on Remote_profile.
Fix fatal error on OMB subscription for first-timers
Remove annoying log msg
Drop error message on setlocale() failure; this is harmless, since we actually have a working locale set up.
Catch uncaught exception
Fixed bug where reply-sync bit wasn't getting saved
Forgot to render the nav menu when on FB Connect login tab
Facebook plugin no longer takes over Login and Connect settings nav menus
...
Conflicts:
db/08to09_pg.sql
db/statusnet_pg.sql
locale/pt_BR/LC_MESSAGES/statusnet.mo
plugins/Mapstraction/MapstractionPlugin.php
DB_DataObject hides errors by silently returning null for any non-existent method call, making it harder to tell what the heck's going on... the rights check for blocked remote users returned null for the check for subscribe rights, thus eval'ing to false. We now log a note in this circumstance, which would have cut about 3 hours off of the debug time.
Added a right for new notices, realized that the hasRight() method
should be on the profile, and moved it.
Makes this a less atomic commit but that's the way it goes sometimes.
Added EmailAuthenticationPlugin
Added ReverseUsernameAuthenticationPlugin
Changed the StartChangePassword and EndChangePassword events to take a user, instead of a nickname
User::allowed_nickname was declared non-static, but used as if it were static, so I made the declaration static
Upgrade notes:
* Index names have changed from hardcoded 'Identica_people' and 'Identica_notices' to use the database name and actual table names. Must reindex.
New events:
* GetSearchEngine to override default search engine class selection from plugins
New scripts:
* gen_config.php generates a sphinx.conf from database configuration (with theoretical support for status_network table, but it doesn't seem to be cleanly queriable right now without knowing the db setup info for that. Needs generalized support.)
* Replaced old sphinx-indexer.sh and sphinx-cron.sh with index_update.php
Other fixes:
* sphinx.conf.sample better matches our live config, skipping unused stopword list and using a more realistic indexer memory limit
Further notes:
* Probably doesn't work right with PostgreSQL yet; Sphinx can pull from PG but the extraction queries currently look like they use some MySQL-specific functions.
For various reasons, it's nicer to have a class for theme-file paths
and such. So, I've rewritten the code for determining the locations of
theme files to be more OOPy.
I changed all the uses of the two functions in the module (theme_file
and theme_path) to use Theme::file and Theme::path respectively.
I've also removed the code in common.php that requires the module;
using a class means we can autoload it instead.
The User_openid data object was explicitly listed as a related field to delete from in User::delete(); this class doesn't exist anymore by default since OpenID was broken out to a plugin.
Added UserDeleteRelated event for plugins to add related tables to delete from at user delete time.
Caching support will be added in future work after unit tests have been added.
* extlib: add PEAR HTTP_Request2 0.4.1 alpha
* extlib: update PEAR Net_URL2 to 0.3.0 beta for HTTP_Request2 compatibility
* moved direct usage of CURL and file_get_contents to HTTPClient class, excluding external-sourced libraries
* adapted GeonamesPlugin for new HTTPResponse interface
Note some plugins haven't been fully tested yet.
This reverts commit 15f9c80c28.
So, so, elegant! And so, so, incorrect!
We can't have a user named 'notice' because that would interfere with
URLs like /notice/1234. However, there is no file named 'notice' in
the Web root.
If there were a way to automatically pull out the virtual paths in the
root dir, this may make sense. Until then, we keep track here.
Canonical URLs that have a protocol followed by a host (and no path) automatically get a trailing slash from the canon function; make the unit test match that.
The reason for this is that the 'file' table's 'url' column is a VARCHAR(255) in MySQL, which silently truncates URLs longer than 255 characters, breaking them.
The proper fix for this is to improve this column, making its type TEXT, but there are no database changes for 0.8.x, so this is the next best thing for data integrity. A migration script for 0.9.x could be written to audit the database checking for redirects and updating these urls to their proper canonical url.
As a first step to pluginizing our OpenID support, I've moved the
important OpenID-related files to a dedicated plugin directory. Many
of these classes are still referred to by libraries that are still in
core.
I added some code so that the site-wide design can be set, using the
configuration interface.
I also moved the configuration option from
$config['site']['design']['background'] to just
$config['design']['background'], but the old syntax will still work.
I changed the reply-to algorithm so that we only say a notice is in
reply to another notice if a) we receive that information from the Web
or API or b) it's in a "low bandwidth" (XMPP, SMS) channel, and begins
with "@nickname" or "T NICKNAME".
The goal is to avoid false-positives and make conversation trees more
accurate and useful.
* candrews-review:
maildaemon makes mail attachments into notice attachments
File classes does not use the $FILES array directly, as users of this class aren't necessarily from the web
* '0.8.x' of git@gitorious.org:laconica/dev: (61 commits)
Using default theme design values (it was previously set to identica
Updated default colour theme and IE6 colours for transparent values
chmod +x delete_status_network.sh
rm -Rf, not rmdir
script to delete a status network
chmod allsites.php
script to show all sites on a network
use different name for connection and database
use /etc/laconica/setup.cfg instead of local file
other base directories
On XHR notice post, calls NoticeAttachment to trigger thumbnail and
oembed and thumbnail don't need sequences
add innodb by default to status networks
pwgen not pwdgen
make pwgen command configurable
a little sql script to drop full-text index and use innodb for profile and notice
remove common_debug from newnotice
append uploads to content rather than showing them double
use a subclass for single notice items to show attachments
make file command configurable
...
Some minor database changes for file tables. Namely:
* Added a timestamp to all tables
* Added a filename column for local files
* Change some tables that had unnecessary auto-increment primary
keys when they had another unique column that should act as
the primary key
* Change engine from MyISAM to InnoDB for a couple of tables.
Also, rebuilt the DB_DataObject files for all these tables.
* userdesign: (56 commits)
Fix for background image repetition for various page heights
Removed height:100% for better background image repetition
A little more specific selector for notice reply
Have user favorites page show user's design
Placed a check to make sure there is a reply button in a notice before
Make MailboxAction read only
Remove stale reference to deprecated personal.php
Uppercase hex color values
Default to image being on, no tile after upload
Fix sidebar color bug default design
Update background image settings to use bitflags
It was accidently removed
Dynamically tile background image and turn background image on or off
Show a background img in settings form
IE7/8 CSS update for user design
Enable tiling of background imgs for Designs
Added background image tile flag to Design
Init styles for tile and image use on/off for user design settings
Added form option to tile background image and to turn it on and off
Add background dir
...
* 0.8.x:
Moved url handling to its proper place, from newnotice to Notice.php
Removed more dead code.
Brought back borders for content, navigation, aside_primary but
Minor margin value change
More contrast for tabs
UI updates:
* 0.8.x:
Revert "Using neutral colour for notice hover"
Using neutral colour for notice hover
forgot to disinherit Memcached_DataObject in Status_network
Add some basic memcached handling to status_network
Status_network can't be a subclass of Memcached_DataObject -- the
latter is too entrenched in Laconica's memc handling functions, which
aren't loaded when Status_network is running! But the importance of
caching these values can't be overstated. So, a considerably
slimmed-down version of the Memcached_DataObject code is transcribed
into Status_network.
* 0.8.x:
a little better query handling in redirect code
a little better query handling in redirect code
forgot some functions aren't available at status time
redirect on non-canonical server name
don't show create-a-group link if not logged in
allow a configured base for cache keys
Missing call to getProfile() caused verify_credentials to fail.
change mods for setup script
Script to set up new status networks
strncmp -> strcasecmp
Return network from network setup function
Configurable avatar directory
* 0.8.x: (32 commits)
updates to Status_network
makeadmin action
make admins of groups
show aliases when showing a group
Link and distribute notices tagged for a group alias
Code for adding and saving group aliases
Styles for group block
add correct li for css magic for block stuff
typo in profileminilist class
return count from show
try to get the right class for profileminilist
fix perms for classes/statusnet.ini
fixup perms for classes
Added Group_alias class
add a table for group aliases
Cross-browser notice_attach
Allow users to be unblocked from a group
Some UI improvements for blocking and unblocking
The rest of the things necessary to make group block work
Make group block work
...
Conflicts:
db/laconica.sql
lib/common.php
Correctly link and distribute notices tagged for a group alias. Added
a helper function, getForNickname(), to User_group, to make it easier
to get a group by its nickname or aliases.
Added code to add and save group aliases. Like tags, aliases are
free-texted into the group admin page. The maximum number of aliases is
configurable; the default is three.
List users who are blocked from joining a group. Add a form to let
them be unblocked. Add an action that removes the block. Includes
changes to group and groupblock classes.
This reverts commit 84072aa5cf.
This commit caused grievous harm to old notices on identi.ca.
Reverting until we figure out how to convert the old notices.
This is the beginning of the code for status.net and related status
farms. It will read basic information about a site from a shared,
central database and use the data stored there to switch on the
hostname.
Took the various places that we create an atom entry for a notice, and
jammed them together into one function of the notice class, and then
used that function. Also, added Atom threading extension and
categories for hashtags.
The OAuth store was failing on getting a request token, because the
token value was forced to be non-null in the DB. Let this value be
null, and use the correct primary key (consumer, timestamp, nonce).
Drop the reference to token table, and don't ever use it.
Added a local directory for locally-installed software. This is where
you should put any code you write, themes, plugins, etc. so they don't
get stomped by upgrades.
We optionally ignore some notice sources from the public page.
Typically these are automatic notice sources like twitterfeed that
don't usually represent the community on the site very well.
The classes/ subdir is primarily for the DB_DataObject classes. Stuff
in there can get stomped by various generation scripts.
I've moved the lurkers there -- related to command-handling -- to
lib/. Since auto-loading works fine with lib/, there shouldn't be much
of a visible change here.
Moved the common_avatar_* functions to the Avatar class. Typically
either as methods on the object or as static methods. Replaced all the
uses of the functions in other modules.
"Rememberme" logins aren't allowed to make changes to an account
(since cookie-stealing is too easy). Users have to re-authenticate.
Previously, it was impossible to do so without having a username and
password; this change lets you do it with OpenID, too.
These command-output channels were using the old common_element_*
functions. They now take an $out constructor parameter, and use that
for output.
The WebChannel has pretty remedial output; it would be nice if it
output a real formatted page.
This reverts commit 0f2c43bd04.
Making Channel a subclass of Action for no other reason than to let
the AjaxWebChannel do some output is the really, really wrong way to
do this. A Channel is not an Action.
I'll change AjaxWebChannel so it takes an Action as a constructor
parameter and uses that Action for its output. We do this for most
Widget subclasses and it makes sense here, too.
Another gigantor PEAR coding standards patch. Here, I've moved the
opening curly bracket on a class statement to the following line.
darcs-hash:20081223194923-84dde-77a93de314caadbcb5b70bf346a4648be77a864e.gz
More PEAR coding standards global changes. Here, I've changed all
instances of TRUE to true and FALSE to false.
darcs-hash:20081223194428-84dde-cb1a1e6f679acd68e864545c4d4dd8752d6a6257.gz
Another huge change, for PEAR code standards compliance. Function
headers have to be in K&R style (opening brace on its own line),
instead of having the opening brace on the same line as the function
and parameters. So, a little perl magic found all the function
definitions and moved the opening brace to the next line (properly
indented... usually).
darcs-hash:20081223193323-84dde-a28e36ecc66672c783c2842d12fc11043c13ab28.gz
Another global search-and-replace update. Here, I've replaced the PHP
keyword 'NULL' with its lowercase version. This is another PEAR code
standards change.
darcs-hash:20081223192129-84dde-4a0182e0ec16a01ad88745ad3e08f7cb501aee0b.gz
The PEAR coding standards decree: no tabs, but indent by four spaces.
I've done a global search-and-replace on all tabs, replacing them by
four spaces. This is a huge change, but it will go a long way to
getting us towards phpcs-compliance. And that means better code
readability, and that means more participation.
darcs-hash:20081223191907-84dde-21e8efe210e6d5d54e935a22d0cee5c7bbfc007d.gz
Changed the flag on notices that says whether the notice is local, so
that it's -1 for local-but-blacklisted. This should keep blacklisted
users off the public timeline.
darcs-hash:20081202184258-5ed1f-cd87ea5c528ea0c90cb31eeb59d4d1ba4f85e9ad.gz
We do some extra caching of streams, at ';last'. If a notice is
deleted, we need to blow those caches, too. So, this deletes them.
darcs-hash:20081124003240-84dde-aa4561e5e68b0ccc0598ac86294ea54f9be5775a.gz
On identi.ca, certain users (http://identi.ca/derricklo) publish 5-10
automated notices every half hour or hour. This can flood the public
stream, making it unreadable for casual readers.
We don't want to prevent anyone from using the site for personal use.
However, if their personal use clouds up the public space, we can
gently remove them from that public space without interfering with
their personal activity.
So: this change prevents selected people's notices from appearing in
the public stream. It's hand-configured by an administrator, and
probably doesn't scale beyond 10-20 blacklisted users. It's a stopgap
measure.
darcs-hash:20081120183722-84dde-8a8401fbcbb6abb60a8b36de249323586ea0b22c.gz
I moved the 4 streams for a user (with friends, faves, replies,
personal) into functions on the User object. Added a helper function
in Notice for making notice streams. Also, will fetch notice streams
out of the memcached server, if possible. Made the API, RSS, and HTML
output all use the same streams (hopefully cached).
Added some code to Notice to blow the cache when a notice is posted.
Also, added code to favor and disfavor actions to blow the faves
cache, too.
darcs-hash:20080928120119-5ed1f-ead542348bcd3cf315be6f42934353154402eb16.gz
I added a new class, Memcached_DataObject, that will (optionally)
fetch data out of a memcached server if it's available. This only
works on 'staticGet'.
Methods that write to the database (insert, update, delete) will clear
and set the cache correctly, too.
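Typical usage, for illustration (a sketch):
// Checks memcached first, falls back to the database, and caches the row.
$user = User::staticGet('nickname', 'evan');
if ($user) {
    // subsequent staticGet('nickname', 'evan') calls are served from the cache
}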
darcs-hash:20080926160941-5ed1f-922de078b4c1941853ad014edf9a17fae486f8cf.gz
Added an inbox and outbox for direct messages.
Factored common code to mailbox.php. Factored common code with
stream.php to personal.php.
darcs-hash:20080916195346-84dde-b5c846f713a970c41fd1b0671cb333e91f3cb920.gz
Add the code to registration to handle invitation codes.
Some edge cases on invitations: is the user already subbed to this
person? Tell them. Is the person already on the system? Sub the user
to them, then, and tell the user.
Add some code to User to auto-sub invitees whenever the email address
changes. Call it from a new registration with an invite code, and also
from confirmaddress.
Some whitespace cleanup in the files touched.
darcs-hash:20080827001927-84dde-b50e5d921ca3f2fb894821730ff93cac09d2ba66.gz
noticesWithFriends is turning out to be one of our most expensive
queries. The join is costly, and this method is hit over and over and
over by desktop clients and other API users.
So, I've added a first pass at caching the results. I store a "window"
of notices -- equal to the first 3 pages of notices, plus one for
pagination -- in the memcached cache. If with-friends notices are
requests, I fetch the whole window out of the cache and grab the slice
requested. If the requested notices are outside the window, we just do
the query. If there's nothing in the cache, we request the window and
store it, then return a slice.
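Condensed sketch of that strategy (constants, key format, and helpers are illustrative):
function notices_with_friends_sketch($user, $offset, $limit)
{
    $window = 20 * 3 + 1;                               // three pages plus one for pagination
    $key    = 'user:notices_with_friends:' . $user->id; // assumed key format

    if ($offset + $limit <= $window) {
        $notices = cache_get($key);                     // hypothetical cache helper
        if ($notices === false) {
            $notices = query_with_friends($user, 0, $window);   // hypothetical: the expensive join
            cache_set($key, $notices);
        }
        return array_slice($notices, $offset, $limit);
    }

    // Outside the cached window: just run the query for the requested slice.
    return query_with_friends($user, $offset, $limit);
}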
I had to add a NoticeWrapper class that works like DB_DataObject
(well, just the fetch() part...) but just holds an array of notices
instead of a DB cursor.
Finally, saving a new notice blows away the caches for subscribed users.
darcs-hash:20080915065616-84dde-1b1e814c2294498a10b763b779cbb62c3f96aa84.gz
* No need to check $source's value before inserting
* No need to update the notice if the $uri was known in advance
darcs-hash:20080902173804-57fc3-496ceaf8192694db43e62f7af1f57785a1a16a01.gz
Make "#sanfrancisco", "#SanFrancisco", "#san_francisco", "#San.Francisco", and "#SAN-FRANCISCO" all link to http://identi.ca/tag/sanfrancisco but preserve appearance
darcs-hash:20080901025932-e3c0d-c0a939eaf7e242d88cbcb0d651c9d53718c60a9d.gz
Make "#test", "#Test", and "#tEsT" all preserve appearance but link to the same tag
darcs-hash:20080901001241-e3c0d-b466f35f4f023c6c90a6d2817487c97be9a1bbca.gz
Breaking up to use multiple queue handlers means we need multiple
queue items for the same notice. So, change the queue_item table to
have a compound pkey, (notice_id,transport).
darcs-hash:20080827211239-84dde-db118799bfd43be62fb02380829c64813c9334f8.gz
Eventually, the poor xmppdaemon has become overloaded with extra
tasks. So, I've broken it up. Now, we have 5 background scripts, and
more coming:
* xmppdaemon.php - handles incoming XMPP messages only.
* xmppqueuehandler.php - sends notices from the queue out through XMPP.
* smsqueuehandler.php - sends notices from the queue out over SMS
* ombqueuehandler.php - sends notices from the queue out over OMB
* xmppconfirmhandler.php - sends confirmation requests out over XMPP.
This is in addition to maildaemon.php, which takes incoming messages.
None of these are "true" daemons -- they don't daemonize themselves
automatically. Use nohup or another tool to background them. monit can
also be useful to keep them running.
At some point, these might become fork()'ing daemons, able to handle
more than one notice at a time. For now, I'm just running multiple
instances, hoping they don't interfere.
darcs-hash:20080827205407-84dde-97884a12f5f4e54c93bc785bd280683d1ee7e749.gz
Added some code to make finishing the OpenID login work.
Changed the OID storage so that there's a "canonical" URL and a
display URL. This is because of i-names, which is annoying.
If the login succeeds, we try to find a local user associated with the
canonical URL. If they don't exist, we let the user either create a
new account, or login to an existing account and connect to it.
A totally unrelated change is that the DB engine now uses InnoDB.
darcs-hash:20080618052638-84dde-909e51dbd5b9eadadf18cd010868baa18ea2349a.gz
Extracted the code for setting a new original avatar to the Profile
class, and moved some of it to Avatar, too. This makes it easier to
have the same functionality whether an avatar is set using the profile
settings (for our users), or on a remote subscription. Necessitated
changing the filenaming function to just take an ID.
darcs-hash:20080605193708-84dde-a441cc0474951ce7f1a1da9310b5145c0b7c3070.gz
First pass at a server-side storage model. New tables for consumers,
tokens, and nonces, with associated classes. An OAuthDataStore class
interfaces with the OAuth.php library to enable server logic.
Some additional work to get pretty-OK random number generation into
the utilities library. Use /dev/urandom if available; else use
mt_rand().
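A minimal sketch of that approach (the function name is illustrative):
function random_hex_string($bytes = 16)
{
    if (is_readable('/dev/urandom')) {
        $fp   = fopen('/dev/urandom', 'rb');
        $data = fread($fp, $bytes);
        fclose($fp);
        return bin2hex($data);
    }
    // Fallback: much weaker, but available everywhere.
    $data = '';
    for ($i = 0; $i < $bytes; $i++) {
        $data .= chr(mt_rand(0, 255));
    }
    return bin2hex($data);
}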
darcs-hash:20080527200721-84dde-308c047af2ebc2c4d753c1e1e24af20fef862a7e.gz
Ran everything through php -l, found out that it didn't compile.
So: fixed the am-I-running-in-Laconica check at the top of each file.
Some syntax fixes in shownotice, showstream, common.
darcs-hash:20080517154701-84dde-8d38da89c5b9cb3b40704adb04a4de880c204181.gz