Moved the various classes used by the Activity class to their own
files. There were more than 10 classes in the same file, around
1500 lines in all. Just too big.
This change makes autoloading work for these classes, so also removed
the hard require in lib/common.php.
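A minimal sketch of the class-per-file autoloading this enables; the
real autoloader in lib/common.php may use different search paths, and
the classes/ directory below is an assumption:

// Sketch only: map each class name to its own file under classes/.
function example_autoload($cls)
{
    $file = INSTALLDIR . '/classes/' . $cls . '.php';
    if (file_exists($file)) {
        require_once $file;
    }
}
spl_autoload_register('example_autoload');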
Superfeedr posts entries without author information. We can assume
these are intended to be by the original author.
Restructured the checks for entries that come in via PuSH so they can
have either no author or an empty author, but not a different author.
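Roughly the shape of that check, as a sketch; the variable names and
exception here are illustrative, not the actual code:

// Sketch: $expected is the profile the PuSH feed belongs to;
// $author is whatever the incoming entry claims, possibly null.
if (is_null($author) || empty($author->id)) {
    // No author or an empty author: assume the feed's owner wrote it.
    $author = $expected;
} else if ($author->id != $expected->id) {
    // A different author on a PuSH feed is not acceptable.
    throw new Exception("Entry author does not match the feed's owner.");
}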
RSS feeds have the format
<rss><channel><item/><item/><item/></channel></rss>. The element named
$rss was actually the <channel> element, so I renamed the variable to
keep from hurting my head.
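In DOM terms the distinction looks something like this (a sketch; the
real parser's variable names differ):

$dom = new DOMDocument();
$dom->loadXML($feedXml);
$rss = $dom->documentElement;                              // the <rss> element
$channel = $rss->getElementsByTagName('channel')->item(0); // the <channel> inside it
foreach ($channel->getElementsByTagName('item') as $item) {
    // each <item> is one entry
}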
This bug was hitting a number of places where we had the pattern:
$dbo->find();
while ($dbo->fetch()) {
    $x = clone($dbo);
    // do anything with $x other than storing it in an array
}
The cloned object shares the original's database result handle, and since $x gets overwritten (and destroyed) on each pass, the clone's destructor would trigger on the second run through the loop, freeing the database result set -- not really what we wanted.
(Loops that stored the clones into an array were fine, since the clones stay in scope in the array longer than the original does.)
Detaching the database result from the clone lets us work with its data without interfering with the rest of the query.
In the unlikely event that somebody is making clones in the middle of a query, then trying to continue the query with the clone instead of the original object, well, they're gonna be broken now.
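As a sketch, the detach amounts to a __clone() hook that drops the
clone's reference to the shared result handle; the property name below
is an assumption about DB_DataObject's internals:

class Example_DataObject extends DB_DataObject
{
    // Detach the clone from the shared result set so its destructor
    // can't free a result the original is still using.
    // '_DB_resultid' is assumed to be the result-handle property.
    function __clone()
    {
        $this->_DB_resultid = false;
    }
}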
* 'testing' of gitorious.org:statusnet/mainline:
Parse RSS items as activities
Remove hkit and do our own hcard parsing
Work around weird bug with HTML normalization via PHP DOM module; if the source had both xmlns and xml:lang I ended up with doubled output, breaking the subsequent parsing. Will have to track this down later and report upstream if not already resolved.
First steps to parsing RSS items as activities. RSS feeds don't seem
to have enough data to make good remote profiles, but this may work
with some "hints".
Parsing hcards for the data we need wasn't hard enough to justify using
hkit. It depended on a number of external systems (something to run
Tidy) and could only handle XHTML.
We now parse HTML with the PHP DOM libraries used elsewhere, and
scrape out our own hcards. It seems to work more nicely and faster,
and most of all it works with Google Buzz profile URLs.
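The scraping itself boils down to a DOM walk over class attributes;
roughly (a sketch, not the actual parser):

// Sketch: find hcards with the stock DOM extension and XPath.
$dom = new DOMDocument();
@$dom->loadHTML($html);   // '@' keeps warnings about sloppy HTML quiet
$xpath = new DOMXPath($dom);
// Match class="... vcard ..." without being fooled by substrings.
$cards = $xpath->query("//*[contains(concat(' ', normalize-space(@class), ' '), ' vcard ')]");
foreach ($cards as $card) {
    // Pull out the formatted name; url, photo, etc. work the same way.
    $fns = $xpath->query(".//*[contains(concat(' ', normalize-space(@class), ' '), ' fn ')]", $card);
    if ($fns->length > 0) {
        $name = trim($fns->item(0)->textContent);
    }
}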
* 'testing' of gitorious.org:statusnet/mainline:
OStatus discover fixes:
Remove xpm support (no one really uses it, and IMAGETYPE_XPM is undefined, causing warnings)
Fix notice warning about unused var -- was renamed during refactoring.
* Subscription::start was sometimes passing users instead of profiles to hooks, which broke OStatus subscription notifications; now normalizing to profiles for processing.
* H-card parsing would trigger a lot of PHP warnings and notices in hKit. Now suppressing warnings and notices for the duration of the call (see the sketch after this list) to keep them out of output when display_errors is on.
* H-card parsing would trigger a PHP fatal error if the source page was not well-formed XML and Tidy was not present on the system. Switched normalization to use the PHP DOM module which is always present, as we have no need for Tidy's extra features here.
* Trying to fetch avatars from Google profiles failed and triggered a PHP warning due to the relative URL not being resolved during h-card parsing. Now passing profile page URL into hKit by sneaking a <base> tag in while we normalize the HTML source.
* Profile pages without a "Link" header could trigger PHP notices due to a bad NULL -> array(NULL) conversion in LinkHeader::getLink(). Now checking that there was a return value before converting a single return value into an array (also sketched below).
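Two of the fixes above, sketched. The warning suppression is just a
save-and-restore of the error_reporting level around the hKit call
(the call shown is an assumption about hKit's entry point):

// Sketch: keep hKit's warnings and notices out of the output stream.
$old = error_reporting();
error_reporting($old & ~(E_WARNING | E_NOTICE));
$result = $hkit->getByURL('hcard', $url);
error_reporting($old);

And the LinkHeader fix comes down to not wrapping a missing value in
an array; names here are illustrative of LinkHeader::getLink()'s
shape, not copied from it:

// Sketch: only normalize to an array when we actually got a header.
$result = $this->getRawHeader('Link');   // assumed helper; may return null
if (!is_null($result) && !is_array($result)) {
    $result = array($result);
}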