Given a notice in the local system, we package it up as an Atom entry and wrap it in a signed MagicSig envelope.
We run the magicenv verification on it locally to make sure our own functions can decode it.
Optionally, with --verify we can send it to Tuomas Koski's verification test service (not sure if this is working 100%).
If given --slap= with a target Salmon endpoint, we'll send it on and see whether the endpoint accepts it. (Note that StatusNet will reject the slap if there's no relevant mention, but will report acceptance for dupes, so you can use a message that's already been delivered as a test.)
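The slap itself is just an HTTP POST of the signed envelope; a rough sketch of that step ($endpoint and $envelope are illustrative names for the --slap= target and the signed XML):

    $ch = curl_init($endpoint);
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $envelope,
        CURLOPT_HTTPHEADER     => array('Content-Type: application/magic-envelope+xml'),
        CURLOPT_RETURNTRANSFER => true,
    ));
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    // 2xx means the endpoint accepted the slap (including dupes, per the note above).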
Added StartRegistrationTry/EndRegistrationTry calls into those three, and moved the actual recording hook to EndUserRegister, which is guaranteed to be called from User::register (so we don't need to worry about other auth methods forgetting to call the other UI-code hooks).
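For reference, a plugin picks these events up via the usual on<EventName> methods; a minimal sketch (the exact hook signatures here are assumptions, not copied from the code):

    class RegistrationLogPlugin extends Plugin
    {
        // UI-side registration paths fire this around their attempt.
        function onStartRegistrationTry($action)
        {
            common_log(LOG_INFO, 'Registration attempt starting');
            return true; // let other handlers run too
        }

        // Fired from User::register() itself, so it covers every auth method.
        function onEndUserRegister($profile, $user)
        {
            common_log(LOG_INFO, 'New user registered: ' . $user->nickname);
            return true;
        }
    }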
We were passing DOM nodes directly into the queues for the final bookmark import stage; unfortunately these don't actually survive serialization.
Moved the extraction of properties from the HTML up to the first-stage handler, so we no longer have to worry about moving DOM nodes from one handler to the next. Instead we pass an associative array of properties, which is fed into Bookmark::saveNew by the per-bookmark handler.
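Roughly the shape of the change (the helper name is hypothetical; the attributes come from the delicious export format, lowercased by libxml2's HTML parser):

    // First-stage handler: reduce each <dt>'s DOM node to plain values that
    // survive serialization into the queue.
    function extractBookmarkData(DOMElement $dt)
    {
        $a = $dt->getElementsByTagName('a')->item(0);
        return array(
            'title'       => $a ? trim($a->textContent) : '',
            'url'         => $a ? $a->getAttribute('href') : '',
            'tags'        => $a ? $a->getAttribute('tags') : '',
            'created'     => $a ? $a->getAttribute('add_date') : null,
            'description' => '', // filled in from the following <dd>, if any
        );
    }
    // The per-bookmark handler then feeds this array to Bookmark::saveNew().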
delicious bookmark exports use the godawful HTML bookmark file format that ancient versions of Netscape used (and which has thus been the common import/export format for bookmarks since the dark ages of the web :)
This arranges bookmark entries as an HTML definition list, using a lot of implied close tags (leaving off the </dt> and </dd>).
DOMDocument->loadHTML() uses libxml2's HTML mode, which generally does ok with muddling through things but apparently is really, really bad about handling those implied close tags.
Sequences of adjacent <dt> elements (e.g. a bookmark without a description followed by another bookmark: "<dt><dt>") end up interpreted as nested ("<dt><dt></dt></dt>") instead of as siblings ("<dt></dt><dt></dt>").
The first round of code tried to resolve the nesting inline, but ended up a bit funky in places.
I've replaced this with a standalone pass over the data that re-orders the elements, based on the fact that <dt> and <dd> cannot directly contain one another; once that's done, the main logic loop can be a bit cleaner (see the sketch below). I'm not 100% sure it handles nested sublists correctly, but those don't seem to show up in delicious exports (and even if they do, the way we flatten the input means it shouldn't make a difference).
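The pass looks something like this (a rough sketch, not the actual plugin code):

    // Sample of the format, with its implied close tags:
    $html = '<DL><DT><A HREF="http://example.com/">No description</A>' .
            '<DT><A HREF="http://example.net/">Another bookmark</A>' .
            '<DD>A description for the second bookmark</DL>';

    $dom = new DOMDocument();
    @$dom->loadHTML($html); // libxml2 nests the adjacent <dt>s instead of closing them

    // Hoist any <dt>/<dd> found inside another <dt>/<dd> up to be its parent's
    // next sibling: these two elements can never legally contain one another.
    do {
        $moved = false;
        foreach (array('dt', 'dd') as $tag) {
            foreach ($dom->getElementsByTagName($tag) as $node) {
                $parent = $node->parentNode;
                if ($parent instanceof DOMElement
                    && in_array($parent->tagName, array('dt', 'dd'))) {
                    $parent->parentNode->insertBefore($node, $parent->nextSibling);
                    $moved = true;
                    break 2; // live node list shifted; rescan from the top
                }
            }
        }
    } while ($moved);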
Also fixed an edge case where some bookmarks with missing descriptions didn't get imported.
I was trying to generate URIs for Bookmarks based on (profile, crc32(url), created).
I failed at that. CRC32s are unsigned ints, and our schema code didn't like that.
On top of that, my code to encode and restore created timestamps was problematic.
So, I switched back to using a meaningless unique ID for Bookmarks.
One way to do this would be to use an auto-incrementing integer ID. However, we've been
kind of crabbed out a few times for exposing auto-incrementing integer IDs as URIs, so
I thought maybe using a random UUID would be a better way to do it.
So, this patch sets random UUIDs for URIs of bookmarks.
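For the record, a version-4 UUID generator is only a few lines in modern PHP (this sketch uses PHP 7's random_bytes; the urn:uuid: URI form is an illustration, not necessarily what the plugin emits):

    function uuid4()
    {
        $bytes = random_bytes(16);
        $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40); // version 4
        $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80); // RFC 4122 variant
        return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($bytes), 4));
    }

    $uri = 'urn:uuid:' . uuid4(); // e.g. urn:uuid:6f0f13a2-...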
Had some problems with PuSH and Salmon use of Bookmarks; they were
required to generate Atom versions of the bookmark _before_ the bookmark was saved.
So, I reversed the order of how things are saved, and associate notices and bookmarks
by URI rather than notice_id.
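Sketched out, the new order is roughly this (field and variable names are illustrative, not the plugin's actual ones; uuid4() as above):

    $uri = 'urn:uuid:' . uuid4(); // shared key, generated up front

    $nb = new Bookmark();         // the bookmark row is written first...
    $nb->uri        = $uri;
    $nb->profile_id = $profile->id;
    $nb->url        = $url;
    $nb->insert();

    // ...so when saving the notice triggers Atom generation for PuSH/Salmon,
    // the bookmark can already be looked up by the shared URI.
    $notice = Notice::saveNew($profile->id, $content, 'web', array('uri' => $uri));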
Added a form for saving bookmarks that looks like the delicious.com form.
It saves a new notice with the right text, but attaches a row in a new
notice_bookmark table which marks the notice as a bookmark. Tags and URLs are kept the same.
Fixes for Twitter bridge breakage on 32-bit servers. New "Snowflake" 64-bit IDs have become too big to fit in the integer portion of double-precision floats, so to reliably use these IDs we need to pull the new string form now.
Machines with a 64-bit PHP installation should have had no problems (except on Windows, where PHP integers are still 32 bits).
Conflicts:
plugins/TwitterBridge/twitterimport.php <- as this hasn't been broken out, the import code is NOT FULLY UPDATED HERE.
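The fix, roughly: json_decode() keeps big IDs exact only in their string form, since on 32-bit PHP the numeric id field has already been rounded to a float by the time we see it ($status here stands for a decoded API status object):

    // Prefer the exact string form; fall back to printing the float without
    // an exponent for old payloads that predate id_str.
    $statusId = isset($status->id_str)
        ? $status->id_str
        : sprintf('%.0f', $status->id);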
Some of our caching systems, like the disk cache or memcached, have
significant overhead (network connections or disk I/O).
This plugin adds an additional layer of in-process cache, so we don't
need to reconnect to external cache systems when we've already
received a data item from the cache. There are some concurrency issues
here, but typically they won't be important at the level of a single
web hit.
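The idea, as a minimal sketch (a plain decorator, not the plugin's actual event plumbing):

    class InProcessCache
    {
        private $backend;         // the slower cache: memcached, disk, etc.
        private $items = array(); // lives only for this request

        function __construct($backend)
        {
            $this->backend = $backend;
        }

        function get($key)
        {
            if (array_key_exists($key, $this->items)) {
                return $this->items[$key]; // no network or disk round trip
            }
            $value = $this->backend->get($key);
            $this->items[$key] = $value;
            return $value;
        }

        function set($key, $value)
        {
            $this->items[$key] = $value; // keep both layers consistent
            return $this->backend->set($key, $value);
        }
    }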
Piwik's currently recommended loader JS creates a <script> tag via document.write(). In addition to being generally evil, this means the browser doesn't know it's going to need piwik.js until that chunk of script gets executed... which can't happen until all scripts referenced *before* it have been loaded and executed.
The only reason for that bit of script seems to be to pick 'http' or 'https' depending on the current page's scheme. This can be done more simply with a protocol-relative link (e.g. "//piwik.status.net/piwik.js"), which the browser resolves as appropriate. Since the reference now sits in a static <script> tag, the browser's lookahead parser will see it and can start loading it while earlier scripts are still parsing/executing.
It may be better still to move to an asynchronous load after DOM-ready, but I'm not sure whether that would screw with the analytics code (e.g. not being able to hook DOM-ready events that have already fired).
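The simplified embed ends up looking something like this (site ID 1 is a placeholder; Piwik.getTracker/trackPageView/enableLinkTracking are the standard tracker calls):

    <script type="text/javascript" src="//piwik.status.net/piwik.js"></script>
    <script type="text/javascript">
    try {
        var piwikTracker = Piwik.getTracker("//piwik.status.net/piwik.php", 1);
        piwikTracker.trackPageView();
        piwikTracker.enableLinkTracking();
    } catch (err) {}
    </script>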
The default full build of OpenLayers.js is 943kb as of 2.10; this gzips down to a couple hundred kb
but is still rather nasty, plus loading it off a remote host could slow things down.
Using a local copy lets us cut the size down significantly by discarding unused features, and further
minification with yui-compressor shaves a bit more off, leaving it about 1/5 the size of the
original.
Also threw in a bundled & minified copy of the Mapstraction classes plus our usermap.js,
which covers the common case of using the default OpenLayers provider. This cuts out three
additional script loads, two of which weren't getting launched until after the mxn.js main
file got loaded.