Current test run includes:
* register accounts (via web form)
* local post
* @-mention using path (@domain/path/to/user)
Subscriptions, webfinger mentions, various paths to subscription and unsubscription, etc. to come.
* drop OStatusPlugin::localProfileFromUrl(); we can just look up on user.uri (see the sketch after this list)
* clean up a few edge cases that returned null through the Ostatus_profile::ensure* code paths; they now throw a clear exception when we can't find a feed for the given profile URL
* add some doc comments on the ensure* methods
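Roughly, the lookup that replaces localProfileFromUrl() looks like this
(sketch only; User::staticGet() is StatusNet's keyed single-row fetch,
and the wrapper function name here is made up for illustration):

// Sketch: find the local user for a profile URI by querying user.uri directly.
function lookupLocalUserByUri($uri)
{
    $user = User::staticGet('uri', $uri);   // returns false if no row matches
    return ($user === false) ? null : $user;
}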
Moved the various classes used by the Activity class to their own
files. There were more than 10 classes crammed into one file of around
1500 lines. Just too big.
This change makes autoloading work for these classes, so I also removed
the hard require in lib/common.php.
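For context, autoloading here leans on the one-class-per-file naming
convention; the resolver in lib/common.php does something along these
lines (a rough sketch, not the actual autoloader code):

function __autoload($cls)
{
    // Sketch: try the usual per-class file locations by naming convention.
    if (file_exists(INSTALLDIR . '/classes/' . $cls . '.php')) {
        require_once INSTALLDIR . '/classes/' . $cls . '.php';
    } else if (file_exists(INSTALLDIR . '/lib/' . strtolower($cls) . '.php')) {
        require_once INSTALLDIR . '/lib/' . strtolower($cls) . '.php';
    }
}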
Superfeedr posts entries without author information. We can assume
these are intended to be by the original author.
Restructured the checks for entries that come in via PuSH so an entry
may have no author or an empty author, but not a different author.
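The check boils down to something like this (illustrative sketch; the
function and field names here aren't the actual Ostatus_profile code):

// Sketch: accept a pushed entry with a missing or empty author, but
// reject one that claims to be from somebody else.
function pushedAuthorIsAcceptable($activity, $expectedAuthorUri)
{
    if (empty($activity->actor) || empty($activity->actor->id)) {
        return true;   // no author given; assume the feed's own author
    }
    return ($activity->actor->id === $expectedAuthorUri);
}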
RSS feeds have the format
<rss><channel><item/><item/><item/></channel></rss>. The variable named
$rss actually held the <channel> element, so I renamed it so I
wouldn't hurt my head.
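For reference, pulling the <channel> out with PHP's DOM extension looks
roughly like this (illustrative, not the original parser code):

$dom = new DOMDocument();
$dom->loadXML($feedXml);                            // <rss>...</rss>
$rss     = $dom->documentElement;                   // the <rss> element
$channel = $rss->getElementsByTagName('channel')->item(0);
$items   = $channel->getElementsByTagName('item');  // the <item> entries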
This bug was hitting a number of places where we had the pattern:
$dbo->find();
while ($dbo->fetch()) {
    $x = clone($dbo);
    // do anything with $x other than storing it in an array
}
The cloned object's destructor would trigger on the second run through the loop, freeing the database result set -- not really what we wanted.
(Loops that stored the clones into an array were fine, since the clones stay in scope in the array longer than the original does.)
Detaching the database result from the clone lets us work with its data without interfering with the rest of the query.
In the unlikely event that somebody is making clones in the middle of a query, then trying to continue the query with the clone instead of the original object, well, they're gonna be broken now.
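The detach-on-clone idea looks roughly like this (sketch only, assuming
DB_DataObject's _DB_resultid bookkeeping in the shared data-object base
class; not necessarily the exact change that landed):

// Sketch: make clones forget the live result set, so a clone's
// destructor can't free the results the original is still iterating.
function __clone()
{
    $this->_DB_resultid = null;
}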
* 'testing' of gitorious.org:statusnet/mainline:
  Parse RSS items as activities
  Remove hkit and do our own hcard parsing
  Work around weird bug with HTML normalization via the PHP DOM module; if the source had xmlns and xml:lang I ended up with doubled output, breaking the subsequent parsing. Will have to track this down later and report upstream if it's not already resolved.
First steps to parsing RSS items as activities. RSS feeds don't seem
to have enough data to make good remote profiles, but this may work
with some "hints".
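As a rough illustration of what such "hints" could look like when
handing a thin RSS feed to one of the ensure* methods (the keys and the
call shown here are hypothetical, not the exact API):

// Hypothetical: supplement a sparse RSS feed with discovery hints.
$hints = array(
    'profileurl' => 'http://example.com/users/alice',
    'feedurl'    => 'http://example.com/users/alice/rss',
);
$oprofile = Ostatus_profile::ensureProfileURL($hints['profileurl'], $hints);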
Parsing hcards for the data we need wasn't hard enough to justify using
hkit. It depended on a number of external systems (something to run
tidy) and could only handle XHTML.
We now parse HTML with the PHP DOM libraries used elsewhere, and
scrape out our own hcards. Seems to work nicer and faster, and most of
all it works with Google Buzz profile URLs.
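The scraping amounts to something like this (illustrative sketch; the
real parser pulls more fields and handles more edge cases):

// Illustrative only: scrape the fn and url out of the first hcard on a page.
// $html is assumed to hold the fetched profile page markup.
$classTest = "contains(concat(' ', normalize-space(@class), ' '), ' %s ')";

$dom = new DOMDocument();
@$dom->loadHTML($html);                    // tolerate real-world tag soup
$xpath = new DOMXPath($dom);

$vcard = $xpath->query('//*[' . sprintf($classTest, 'vcard') . ']')->item(0);
if ($vcard) {
    $fn   = $xpath->query('.//*[' . sprintf($classTest, 'fn') . ']', $vcard)->item(0);
    $link = $xpath->query('.//a[' . sprintf($classTest, 'url') . ']', $vcard)->item(0);
    $name = $fn ? trim($fn->textContent) : null;
    $url  = $link ? $link->getAttribute('href') : null;
}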