April 15, 2013

Californication

There will always be extremes in Californication, from the initial minutes of a blasphemous moment in a church by an agreeable nun, to the moment that dreams do come true.

Maybe boys will be boys and we just enjoy lewdness, or maybe there is a grain of truth in it. But 6 years ago, when I started watching the show, I never imagined the roller coaster of emotions it would trigger.

Based on a set of never-ending love stories, some peaceful, others just normal, and one big sad one, it keeps rocking my socks off with its impossibly real situations and fulsome dialogue.

You get to live in an alternative universe in an even dozen instalments every year, and the only thing left after you are done is a longing for the next fix.

Thank you, Hank Moody, you degenerate, impossibly lucky bastard. See you next year.

January 07, 2013

Dreaming in sync

If you ever get me talking about applications and synchronisation of data, you might notice that I'm very passionate about sync.

You'd be right. My favourite topic in college was distributed operating systems. Today, when I design a system, I always envision it as a set of cooperating processes, working together and in parallel for a common goal.

Over the past 15 years, I've kept refining a set of rules about what I think are the ideal features of any application that does sync.

I've narrowed it down to three.

1. Close your laptop and go

If you see a "sync now" button, they blew it.

To paraphrase Vince Lombardi:

Sync is not a sometime thing; it's an all time thing. You don't sync once in a while, you don't do things right once in a while, you do them right all the time. Sync is habit.

If you share application state with your co-workers (say for example a project management app, with issues, goals, notes, design documents, whatever), when you are sitting in your office, your own version of the application is running on your laptop. It has a copy of the state. Maybe not all of the state, maybe just the part that relates to you, but for now let's assume that it has all of the state.

When you close the laptop lid and move to another place, the information in your local copy should be the updated version of the shared state up to the moment you lost network connectivity.

There shouldn't be any "oh, I'll just wait a bit and sync before I disconnect" thought crossing your mind. Sync should not be part of your thinking process. It should just be the normal world view; it should be subconscious.

One of the best outcomes of this always-in-sync subliminal state? Given that you'll always be seeing and manipulating the latest version of something, the opportunity for conflicts based on misinformation is rare.

2. Cherish thy conflicts

Which brings us to the big bad wolf of synchronisation: conflicts. This is the primary fear most people who think about implementing a sync solution share.

What I'm here to tell you is that conflicts are your friends. They are the tether that binds you to humanity, because they only come up when someone has a different world view from your own.

Having someone who disagrees with you is glorious! It gives you the opportunity to get out of your bubble and interact. And you get to choose how to do so. Maybe you just IM or mail him, or maybe you call him, or even share a few minutes of his time in a hall near a whiteboard.

Conflicts are nature's way of telling you that you need to get out a bit and talk to someone.

3. Be a packrat, collect it all

The most important lesson I got from git was not the graph of objects on which it was built, but the fact that git never stores diffs. It will always store a complete version of the new state.

The main advantage of having the entire state blob for each version is simple: you can always improve your conflict resolution, or your diff algorithm, or any part of the UX of both, because you have the raw data available. If you store diffs, you'll be stuck with the state-of-the-art diff algorithm of that moment, and going back to any version means replaying all the diffs since the last full version.

You might think that this is a huge waste of space, and it could be if you have big state blobs of which only a small percentage changes between versions, but I posit that this is not the most common case. In fact, big state blobs should set off warning bells in your head, prompting you to break them up into smaller concepts.

No, you want small state blobs, connected together. They are faster to sync individually, and they provide a smaller surface for conflict resolution when those blissful events are cast upon you.

And with small state blobs, when the state changes, you should always store the entire new blob, never just a diff.
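To make that concrete, here is a minimal sketch of a content-addressed store in Perl (assuming only the core JSON::PP and Digest::SHA modules): every change saves the full serialized blob, keyed by the SHA-1 of its canonical form, git-style. The store is an in-memory hash just for illustration; a real application would persist it.

#!/usr/bin/env perl
# Sketch only: store every version of a small state blob in full, keyed by
# the SHA-1 of its canonical serialization (no diffs anywhere).
use strict;
use warnings;
use JSON::PP ();
use Digest::SHA qw(sha1_hex);

my $json = JSON::PP->new->canonical;    # stable key order => stable digests
my %store;                              # digest => serialized full state

sub save_state {
  my ($state) = @_;
  my $blob   = $json->encode($state);
  my $digest = sha1_hex($blob);
  $store{$digest} = $blob;              # identical states dedupe for free
  return $digest;
}

sub load_state { $json->decode($store{ $_[0] }) }

my $v1 = save_state({ title => 'Sync article', status => 'draft' });
my $v2 = save_state({ title => 'Sync article', status => 'published' });
print "$v1\n$v2\n";                     # two full versions, no diffs stored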

Let's be about it

So you are hyped now, you want your next application to have sync. What is your next step?

The Dropbox generation

The sync experience of Dropbox is comparable to the automobile experience of a Model T. Or the sexual pleasure of masturbation. It gets the job done, barely, but you end up thinking that there should be more to life than this.

It should, and there is.

Dropbox is a fine product, I depend on it daily for a lot of things, but most of them are files.

To sync files between devices and across several operating systems, there aren't many other solutions out there with the same proven track record. So in recent times, a lot of applications that deal with files have added built-in Dropbox support as their solution to the sync problem. And it works fine, for files and single-user scenarios.

But when you start talking about application state, with multiple users, if you plan on using Dropbox, you better start modelling your data store as a series of independent files. And forget about data protection of any kind.

But it can be done. For the single user case, it is more than enough.

The best example I know (and use, and recommend by the way) is 1Password. A couple of versions back, they switched from a single file to a file bundle as their storage system. You can read all about the Agile keychain design, good stuff in there.

What they did was break a large state blob into several smaller ones.

But the Dropbox API lacks any way to be notified in real-time of changes others may have made to your files. Sure, the desktop client uses a private Dropbox API to be notified of new stuff, but that is not available to you.

So, if you plan on using Dropbox as the sync service for your app, remember that the always-in-sync scenario requires the Dropbox desktop client, and you have to monitor the filesystem yourself (FSEvents on Mac OS X, inotify on Linux, kqueue on FreeBSD...) to detect changes.
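If you don't want to commit to one of those native APIs right away, a dumb polling loop is enough to prototype the idea. The sketch below uses only core Perl modules and a made-up Dropbox folder path; it just compares sizes and mtimes between scans and prints what changed, which is where your sync logic would hook in.

#!/usr/bin/env perl
# Portable polling sketch (core modules only): detect changes under a
# Dropbox-synced folder by comparing size+mtime between scans. The native
# APIs above are more efficient; this is the minimum to get a change signal.
use strict;
use warnings;
use File::Find ();

my $dir = shift @ARGV || "$ENV{HOME}/Dropbox/MyAppState";   # made-up path

sub snapshot {
  my %seen;
  File::Find::find(
    sub {
      return unless -f $_;
      my ($size, $mtime) = (stat(_))[ 7, 9 ];
      $seen{$File::Find::name} = "$size:$mtime";
    },
    $dir
  );
  return \%seen;
}

my $before = snapshot();
while (1) {
  sleep 2;
  my $now = snapshot();
  for my $file (keys %$now) {
    next if defined $before->{$file} && $before->{$file} eq $now->{$file};
    print "changed or new: $file\n";    # hand off to your sync/merge code
  }
  for my $file (keys %$before) {
    print "deleted: $file\n" unless exists $now->{$file};
  }
  $before = $now;
}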

But, but, but git is magical!

Yes it is. But only for some things. Files, source code, text articles.

If you plan on basing your sync solution on git, that's fine. It is a viable option now that libgit2 is stable.

Just remember that source code conflicts are easier to solve because they happen on something that has structure, something that can be validated, at the syntax level by your language compiler or interpreter, and at the semantic level by your test suite. You do have one of those, right?

There is no possible test suite that covers your application data semantics. You can have some business rules that must be checked before accepting changes to application state, but they will never be comprehensive.

Also, git lacks any way to notify you in real-time of new commits on remote branches, so you have to roll your own too.

Natives need not apply

You might think that all of this is my subtle way of pushing you from your friendly web to the evil seductive native apps embrace, but no.

Yes, I do believe that the vast majority of your app logic should be executed on the client side. It's not only my distributed systems engineer wet dream, but also your ecological duty not to waste all those processor cycles sitting on your lap or nicely wrapped in your hand.

I also believe that no matter how much bandwidth you have, how blazing fast your servers are, you can't beat the latency of a local app working on local data.

And besides, even if you have always-hopefully-on internet connectivity, isn't it better to hide all those wishful thoughts about "when the user presses this button I will have network access, my server will be up, and everything is peachy" in the smallest part of your application you can manage? If your entire app uses local data, and then, in the background, those changes are sent to the remote peers, the user experience is much faster.

So keep your mad Web skills; they are your best bet in this brave new continent you're about to explore. Just think about bringing along a simple HTTPS server, sitting right there on your local device, holding a copy of your app and your state.

Interactive sync

There is a crop of products that focus on real-time collaboration. Software like SubEthaEdit and, recently, the ex-Google Wave.

They address very specific niches of the whole synchronisation topic, and I don't think they are a good solution for most apps. A bit of overkill.

Action!

Finally, my brain dump from having done some synchronisation work.

Divide and conquer. Start with a simple problem. Don't make sync a feature of a future version; plan it from the start. It is very hard to bolt sync on later, believe me.

Cherish immutable content, or single-owner content, like comments. No conflicts there. A post has a set of comments, and this set is conflict-free.
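A minimal sketch of why that works, using plain Perl hashes: each comment is written once, by one owner, under a unique id, so merging two replicas is just a set union and there is nothing to resolve.

# Sketch: single-owner, immutable comments merge without conflicts.
# Each comment is written once and keyed by a unique id, so merging two
# replicas is a plain set union (a grow-only set).
use strict;
use warnings;

sub merge_comments {
  my ($mine, $theirs) = @_;            # hashrefs: comment_id => comment body
  return { %$theirs, %$mine };         # same id => same immutable content
}

my $laptop = { c1 => 'Looks good', c2 => 'Ship it' };
my $office = { c1 => 'Looks good', c3 => 'One more test?' };
my $all    = merge_comments($laptop, $office);
print join(', ', sort keys %$all), "\n";   # c1, c2, c3 - no conflict possible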

Embrace that conflicts are a human problem, not a technical one, and instead of wasting your time dreading them, focus on the best UX you can provide to make the work of two or more people involved easier to do: how to pick the new world view?

Start with a central meeting place, where all changes flow through. It is easier to do at first; just make sure this central place never accepts conflicted information. All conflicts must be solved on the client side before any changes are sent.
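One simple way to enforce that (a sketch, not a prescription): make every submitted change name the version it was based on, and have the central place refuse anything based on a stale version, forcing the client to merge locally and resubmit.

# Sketch of a central meeting place that refuses conflicted writes: a change
# must state the version it was based on; if the server has moved on, the
# client gets told to merge locally and try again.
use strict;
use warnings;

my %current = ( version => 'v1', data => { status => 'draft' } );

sub accept_change {
  my ($change) = @_;
  return (0, "rebase needed: server is at $current{version}")
    unless $change->{base_version} eq $current{version};
  %current = ( version => $change->{new_version}, data => $change->{data} );
  return (1, 'accepted');
}

my ($ok, $msg) = accept_change({
  base_version => 'v1',
  new_version  => 'v2',
  data         => { status => 'published' },
});
print "$msg\n";    # accepted; a second submit based on 'v1' would be refused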

Start with "rolling sync", the simplest approach to merges: if you have been offline for a bit of, first undo all your offline changes, apply all remote changes done since your last sync, and then roll your changes over that to catch any conflicts. If you know git, think rebase, not merge.

Use UUIDs or the SHA-1 of content as identifiers: there is no clear-cut rule on which one to pick, it really depends on the situation.
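For reference, the two flavours side by side (Digest::SHA is a core module; Data::UUID is a CPAN module I'm assuming is installed): a content digest is the same on every peer for identical content, while a UUID is unique even when the content is identical.

use strict;
use warnings;
use Digest::SHA qw(sha1_hex);
use Data::UUID;

my $content = q({"status":"draft","title":"Sync article"});

my $sha1 = sha1_hex($content);            # same content => same id, any peer
my $uuid = Data::UUID->new->create_str;   # new id every time, content-free

print "sha1: $sha1\nuuid: $uuid\n";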

Remember the truth about clocks: everybody has one, and each one keeps his own perfect time. If only they could agree on what that perfect time is...

Don't ask permission to collect all of what you can from your peers, just do it. Later, a person can always check why something was changed, and who did it.

It's a brave new world

Sync is not just a feature anymore; it must be something that is just there, unquestioned, subliminal.

Do your part, and start designing your apps with a "Sync first!" mentality.

September 21, 2012

tidyall: TextMate command, and other tidbits

I've started using Jonathan Swartz's excellent tidyall for all my tidy needs. This is the TextMate command that I use to replace the Tidy command distributed with the Perl bundle.

require_cmd tidyall 'Requires tidyall, install from CPAN'

### Have a default tidyall.ini in your home directory, override per project
config="$HOME/.tidyall.ini"
for ini in tidyall.ini .tidyall.ini ; do
  if [ -e "${TM_PROJECT_DIRECTORY}/${ini}" ] ; then
    config="${TM_PROJECT_DIRECTORY}/${ini}"
  fi
done

## Override target file if the current scope is perl: useful when
## tidying a selection of perl code inside a Mason document
target="$TM_FILEPATH"
if [ "${TM_SCOPE:0:11}" == "source.perl" ] ; then
  target="${TM_PROJECT_DIRECTORY}/perl.pl"
fi

tidyall --root "$TM_PROJECT_DIRECTORY" \
        --conf-file "${config}"        \
        --pipe "$target"

To use it, create a new command in your own personal bundle, and set:

  • Save: Nothing;
  • Command(s): the above code;
  • Input: Selected Text or Document;
  • Output: Replace Selected Text;
  • Activation: Key Equivalent Ctrl+Shift+H (the same as Tidy);
  • Source: source.perl, comment.documentation.perl.

For the last one, Source, you should also add any other scopes that your tidyall config covers.

My current tidyall.ini is this:

[PerlTidy]
select = **/*.{pl,pm,t}

[PodTidy]
argv = --column=100
select = **/*.{pm,pod}

[Perl::AlignMooseAttributes]
select = **/*.pm

[Perl::IgnoreMethodSignaturesSimple]
select = **/*.{pl,pm,t}

[MasonTidy]
select = **/*.{mc,mi,html}
select = **/{autohandler,dhandler}

The MasonTidy section is still new to me; I'm not sure if I like it or not. The rest is very good, especially the clean integration for tidying Method::Signatures declarations provided by the Perl::IgnoreMethodSignaturesSimple plugin.

Although this plugin targets Method::Signatures::Simple, I'm using it with Method::Signatures, my preferred version of the concept, without problems.

I've also used the recent git pre-commit hook code for a bit, but I'm having problems with it and haven't had the time to really look into it. The code stashes anything that is not in the index before running, to make sure it is only tidying the parts you are going to commit, and then pops the rest from the stash. This last part causes me a lot of merge failures.

If you don't use hunk or line-based commit building, you should be fine though.

September 19, 2012

Sleep

Our youngest daughter, now 9 months old, still wakes up at least twice during the night to eat. In that she joins her older brothers, who shared the same predilection for night-time snacks.

One interesting side effect of this behavior is that I haven't had a full night's sleep since the day she was born.

I usually can get right back into dream land after she finishes her milk, but lately I find myself sleepless in bed after each meal. I've now taken extreme measures and doubled the number of paperback books on my bed stand. We'll see how it goes.

Usually I'm more of an audiobook person, but I find that carbon-based versions are more sleep-inducing than audio. And no, I don't have an e-reader of any kind yet.

I was thinking about this just now, and how I wished she were older and past these night-time adventures into milk land. I even said out loud to my wife, "I wish she was 18 and I could sleep all night"...

... Yeah, I really don't know what is about to hit me.

I guess I'll enjoy these next few years, knowing that she is safely asleep in her bed just a couple of meters from me. I'll keep enjoying the tiresome moments together during the night. They will go away soon, and that will be a sad day.

September 18, 2012

*Flow setups for git

Pedro Figueiredo pointed me to HubFlow, a twist on the original GitFlow setup to work more closely with Github.

I used GitFlow for a couple of weeks when I first found it, but eventually dropped it because it didn't fit my needs. I'm not dissing it; it is very well thought out and it provides a nice set of tools to make it all work, but it also adds a complexity layer that may not match your environment.

There are two situations in particular where I would recommend not using these *Flow setups: single-developer projects, and teams that use a faster release train.

The first should be obvious: all those shenanigans moving code from topic branches to develop to release branches make sense if you have a team of people with defined roles, like module owners who must approve changes to their own turf, or release managers who take a set of features accepted into the develop branch and make a release out of it.

The second is more of a personal preference. I hate the release manager notion. I'm sure that it has value in a lot of projects, but I can say that I'm fortunate enough not to have worked on one of those. My preference comes from a previous job where I was the de facto release manager, not because the position was official, but because I was more adept at (or masochistic about, depending on the point of view) merging CVS branches... Oh yes... CVS... Branches... Hell.

So I moved to keeping the trunk/master branch of my projects always deploy-able, to automating the release process, and to making sure releasing is one command away, available to anyone on the team.

*Flow setups get in the way of that a bit. The develop branch sits between topic/feature branches and master, and delays the move of code from feature to deployment status. I don't like that.

Having said that, if you have a team of developers, if you use git, and your rough setup is getting in the way, I would suggest you try one of these *Flow setups. At least read them thoroughly and take the ideas that fit your local conditions.


For future reference, this is the setup I use for single-developer situations. It is based on git, but most of this could be applied to any SCM that has decent branch/merging semantics.

The master branch is the production version: it's always deploy-able, and it's also regularly smoke tested with two configurations:

  • one that mimics the production environment as much as possible. If your DB is small enough to have a copy available for smoking, do so. Also use the same versions of the dependencies you have in production. If you are a Perl user, Carton is very helpful here.
  • the second with the latest version of your language, plus the latest version of your dependencies: I'm using perl 5.14.2 but I smoke it with 5.16.1 and 5.17.latest, with the latest versions of my dependencies. This will catch future problems early.

All development is done on feature branches, one branch per feature. Feature branches can/should be pushed to the central repository to be picked up by the CI and tested with a more production-like setup. When fully ready, or ready enough to be hidden from end-users behind a feature flag, merge into master and release it. I always use --no-ff when I merge; I like to know which commits were made outside master.

If a feature is being developed for a long time (more than a month), then either rebase it regularly (my preference) or at least create an integration branch from master and merge the feature branch into it to test. It is essential to test long-term branches with the latest production code.

This works quite well for me. Simple, low complexity, and I can explain it in less than 3 minutes.

July 28, 2012

Tip: homebrew, old groff and perldoc

With recent perls, the perldoc command started spouting warnings about an antiquated groff on OS X (both 10.6.8 and 10.8.0):

You have an old groff. Update to version 1.20.1 for good Unicode support. If you don't upgrade, wide characters may come out oddly.

Given that I already have homebrew to fix all my UNIX desires, I promptly executed brew install groff to fix this. You might need to brew tap homebrew/dupes first, given that groff is already included in the base system.

After a couple of minutes (and, on Mountain Lion, an extra brew install --default-names gnu-sed because system sed complains about sed: RE error: illegal byte sequence; you can brew unlink gnu-sed afterwards to revert to system sed), I had my new groff.

But now a new error message awaited me:

Error while formatting with Pod::Perldoc::ToMan: open3: exec of /Users/melo -man -Kutf8 -Tutf8 /.homebrew/bin/groff failed at /Users/melo/perl5/perlbrew/perls/perl-5.14.2/lib/5.14.2/Pod/Perldoc/ToMan.pm line 327.

Notice the command Pod::Perldoc::ToMan is trying to execute mixes parameters with the command path.

The problem lies with Pod::Perldoc::ToMan. At some point it decides that it should use groff -man -Kutf8 -Tutf8 as my renderer, and it figures out that my groff is inside my local homebrew install (under /Users/melo/.homebrew, notice the .homebrew). Eventually this command is split into command and parameters (at line 301 of ToMan.pm, version 3.17 to be exact), and that is where the problem lies: the regexp used doesn't support dots (.) in the pathname, like my .homebrew, and splits at the wrong place.
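To illustrate the class of bug (this is not the actual ToMan.pm code, just a made-up pattern with the same blind spot): a pathname regexp that only allows word characters and slashes stops matching at the first dotted directory, so the path gets cut exactly where my .homebrew starts.

# Illustration only; not the actual Pod::Perldoc::ToMan code.
use strict;
use warnings;

my $groff = '/Users/melo/.homebrew/bin/groff';

# dot-hostile pattern: the capture stops right before '.homebrew'
my ($bad_path, $leftover) = $groff =~ m{^((?:/\w+)+)(.*)\z};
print "bad:  path='$bad_path' leftover='$leftover'\n";
#   path='/Users/melo' leftover='/.homebrew/bin/groff'

# dot-friendly pattern: allow dots and dashes inside path components
my ($good_path) = $groff =~ m{^((?:/[\w.-]+)+)\z};
print "good: path='$good_path'\n";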

I've already sent a pull request to fix this, and it was accepted and merged into the distribution, so the next version will work fine. In the meantime, if you come across this problem, you can just hand patch your Pod::Perldoc::ToMan file like I did.

June 02, 2012

Maps

From the invite to the Google Maps event next week:

... will give you a behind-the-scenes look at Google Maps and share our vision. We’ll also demo some of the newest technology and provide a sneak peek at upcoming features...

I translate this to: "we don't really have anything ready to ship, but we are scared shitless about what Apple is going to announce at WWDC, because it might be really, really cool, and it might show just how much we dropped the ball with Maps. We got cozy with all the cool stuff we did way back then, and sat on our asses, just toying with new stuff (like 3D, even), but without the focus required to really have something to launch. Yeah, we fucked up, but maybe we can trick you into looking this way for a couple of days, you know, until the WWDC buzz is over."

I really hope I'm wrong. But if I'm not, this is just sad.

January 14, 2012

Digam-me (Tell me)

Dear Members of Parliament, my very best good morning,

For context on what I want to ask you, I suggest reading this article: Matem o monstro ("Kill the monster").

These are two bills that weigh on the soul of everyone who depends on technology to make a living here in Portugal.

If you know of an effective way for me to fight this absurdity, tell me. I already have a new challenge this year: some public entities changed the certification rules I'm subject to (for the better, I sincerely believe, once the systems are stable), but we are already bound by them even though the official regulation has not been publicly distributed. They tell us it will come at the end of the month... if all goes well... An interesting part of this new regulation (because, of course, the price list is already online on their site) is that they can carry out as many inspections as they want per year, but I have to pay a few hundred Euros for each one, regardless of whether any infraction is detected.

I wanted to focus on that, after all it is my business, but it seems I have to waste time with this gem that is PL118 (I haven't read PL119 yet, but I'm told it's of the same calibre), since hard drives are something I need regularly to keep my service running, a service that uses only content produced by my customers; but there you go, everybody pays their tithe to the holy SPA.

Honestly, tell me what to do to fight this. This year it also seems we may become responsible for the social security contributions of our collaborators (sporadic, one-off collaborations) if 80% of their income comes through my company. I understand the need to catch evasion of those payments, but imagine that one of these companies takes care to require all of its collaborators and partners to issue invoices and receipts, and takes care to declare everything it earns and pays. Between that company and another that doesn't, who is going to get stuck with the social security bill?

2012 is going to be a fun year for us: our customers will have even less money to spend, there are new official-but-not-public regulations we have to obey, the potential social security bill, and other challenges we set for ourselves to grow our business. Many challenges, no doubt. But we believe it is still possible, and we even have a third child as proof that we believe things can get better.

What I don't need is a law, tailor-made for the SPA, whose business model is disappearing because people have finally understood that it is idiotic to buy small shiny discs when it is more practical to get things legally online, and which decides that everyone must pay a private entity just in case some people are pirating content. I don't need, and don't have the time or money for, what they want to implement here: yet another VAT-like regime.

So, tell me: how does one fight an injustice like this? Or should I give up on investing in Portugal right now and go look for work abroad, where, despite already being 40, I could get a well-paid job tomorrow, based on the investment Portugal has already made in me?

November 25, 2011

Hosted.IM XMPP service

This morning I migrated from my old personal ejabberd XMPP server to the Hosted.IM service by Process One.

tl;dr: The migration went smoothly and I'm very happy with the service. I still have two things left to do (the SSL certificate and migrating my XEP-0114 external components), but for personal reasons I won't have the time to finish the migration until next month. So expect a small update when I finish those two.

For now, I'll just describe the service and my migration process, the good parts, the could-be-improved parts and other tidbits.

The Hosted.IM service provides a hosted XMPP service powered by a carrier-grade ejabberd XMPP server. The service was created and is maintained by Process One, the main driving force behind ejabberd development over the past years. I had the pleasure of working with Process One and Mickael in particular when we migrated the SAPO XMPP server.

The basic service is free (5 user accounts, a single domain).

The main roadblock preventing my migration was the couple of custom-made external components I wrote to interact with some web and TCP-based services that I currently host, so I needed proper XEP-0114 support. I asked Mickael for it and less than two weeks later, they delivered. The XEP-0114 support was officially announced yesterday on all non-free plans.

With this final roadblock lifted, I was ready to migrate my domain.

I've registered a new account and added my domain, simplicidade.org. The first thing they ask you to do is add a DNS TXT record to validate that you own the domain. I don't understand why they need this TXT record. The XMPP service requires you to add or update a couple of SRV records to point to the Hosted.IM XMPP servers. If you aren't the owner of the domain, you won't be able to update those records, so why ask for an extra DNS record? I hope they clean up this process and remove this particular requirement. Or, if this is really something they need, add a FAQ explaining why it's needed.

I also immediately updated my SRV records to point to them, using the provided examples. This turned out to be a mistake that you should avoid.

If you are migrating an existing domain, I strongly recommend that you don't update the DNS SRV records at first. You should first create the accounts on the new service, migrate the current users' rosters and vcards, and only then switch the DNS records. This should be pretty obvious stuff, but I was eager to move and failed here.

The Process One support personnel will accept a SQL dump of your rosters and vcards and load it up on the Hosted.IM service. I was lucky because I was already using ejabberd with the SQL backend, so I only had to clean up old accounts, dump the SQL database and send them the file.

This data migration process is unfortunately not documented yet. New users don't even know the possibility exists; I had to ask on Twitter to discover it. I also don't know what other formats or other servers' export files they support. So check with support before you decide to switch, to figure out how you'll migrate the roster information.

As I said, I didn't prepare that part, so I had to scramble to dump the rosters and send them the SQL so that my users didn't end up with an empty roster. Fortunately the support staff was awesome and I quickly had my SQL dump loaded onto the service.

After this was done, I closed the firewall for my old server's C2S ports and started up my XMPP client. I connected to the new service without any problems.

From start to finish, and even with all the discovery and learning the layout of the administration interface, it took me less than 2 hours to have my service migrated. Pretty good.

I then selected the cheapest plan, at €8/month. The payment system is not clearly explained on the site. It works as a prepaid system: you load your account with €nnn and they deduct the amount you owe every month. You also get some bonuses if you load your account with large amounts. After spending 10 minutes wondering where the payment interface was (after you change plans, the interface appears in the account section; while on the free plan, it's not visible - this is suboptimal, and a link in the Plans & Pricing tab, or near the cost values in the domain administration tab, would be more helpful), I loaded my account with €100 and got a €8 bonus.

Some services cost extra. For example, using your own SSL certificate costs €2/month, and connectivity to other IM networks costs €4/month. Unfortunately, these extra costs are not described on the service homepage. You have to register for the trial account and then check the Plans and pricing tab inside your domain management admin page.

Aside: on the service homepage, if you select Plans & Pricing in the navigation toolbar at the top, the javascript scrolls down to the #pricing section but fails to update the page location, which makes sharing the direct link harder.

And this is where we stand right now: accounts, rosters and vcards were migrated successfully, and I was able to load my account with enough money to last me a bit less than a year (I was not counting on the extra €2/month for the certificate).

The next step is creating my own certificate. This part of the process could be improved a lot. For technical reasons, you have to upload the certificate's private key. But if that is a requirement (and it is, I understand that part pretty well), then they could save their clients a lot of work if they just took care of all of it: add the option to generate the key on their servers, and send me the Certificate Signing Request file so that I can request a certificate from a CA that supports XMPP certificates (which are slightly different from HTTPS certificates; they require an extra extension). It would be helpful if they recommended a couple of CAs providing the service, but they do not.

In the past, the XMPP Standards Foundation provided a free XMPP certificate service, but it was shut down some time ago. According to their page above, you can buy an XMPP certificate from the StartSSL CA, but I'm still figuring out how to do this. It should be straightforward, the same process as for an HTTPS server, and I'll update this article after I've successfully done it, but the StartSSL site lacks XMPP-specific information.

After I have that part done, I'll move my external components. Some of them are sub-domains of my main simplicidade.org domain and those should be straightforward.

Others use a completely different domain name. This is an unusual setup: I basically used my own ejabberd server as an XMPP router for some domains. I connected those domains as external components and pointed the S2S DNS records at the ejabberd server.

Hosted.IM does not yet support this mode of operation, but I again asked Mickael about it, and this unorthodox configuration should be supported very soon. Awesome.

All in all, a pretty smooth ride.

November 07, 2011

API design

Designing a good API is, unfortunately, an art form. I've seen some efforts to graduate this process to the level of science or engineering, but nothing even close to being accepted by a majority of programmers so far.

One particular problem I face from time to time is related to helpful APIs. Those tend to help the programmer complete their task by adding common fallback processing paths. It usually goes something like this: the API designer convinces himself that X is the most common operation, so when the user doesn't do it, the API does it for them. The programmers have less to write because the API will do the right thing™.

My problem with those APIs is that they tend to fail silently. The program keeps on going, using the default processing path, and eventually the programmer's lack of decision will bite him with an exception or a core dump (pick your poison).

I tend to prefer non-helpful APIs: if the programmer should make a decision, force him to do so and die as soon as possible if he doesn't, at compile time if you have that concept and can make it happen there.

The decision the programmer needs to make could even be to do the default processing path, but he must make one.
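A small sketch of the two styles in Perl (the class names and the retries knob are made up for illustration): the helpful constructor silently falls back to a default, while the strict one dies at construction time unless the caller makes an explicit choice, which may well be the default.

#!/usr/bin/env perl
# Sketch: a "helpful" API with a silent fallback vs a strict one that dies
# early when the caller never decided. Names and the 'retries' knob are
# invented for the example.
use strict;
use warnings;

package HelpfulClient;
sub new {
  my ($class, %args) = @_;
  $args{retries} //= 3;    # silent fallback: the caller never decided
  return bless {%args}, $class;
}

package StrictClient;
sub new {
  my ($class, %args) = @_;
  die "decide 'retries' explicitly (pass 'default' if the default is fine)\n"
    unless defined $args{retries};
  $args{retries} = 3 if $args{retries} eq 'default';
  return bless {%args}, $class;
}

package main;
my $meh  = HelpfulClient->new();                     # works, decision hidden
my $ok   = StrictClient->new(retries => 'default');  # explicit decision
my $boom = eval { StrictClient->new() } or warn $@;  # dies early and loudly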

It is a little bit more code to write, but it pays off enormously in long-term readability by eliminating hidden default behavior. Every action is explicit, so a new programmer looking at the code (e.g. you, the same one who wrote it the first time, only 6 months later) will need less background information to understand it.

So help your API users. Die often, die early, be explicit, and stop helping lazy programmers.

October 30, 2011

Smart TVs

Since Isaacson's Steve Jobs biography was released, a lot of articles have been written about one single passage on the topic of TVs: "I finally cracked it."

From that, an explosive number of articles sprang up about Apple doing an integrated TV set, a smart(er) TV. The term disruption has appeared in almost all of those articles. I'm sure everybody would love to see what Steve Jobs and his team would come up with in that space. I have to wonder, though, if this was not the last grandmaster chess move against his competitors.

Wouldn't it be very Jobsian to send all of Apple competitors in one last great wild goose chase?

To explain my rationale (and tackle that overused "disruption" term), let me start with a question: please point me to a recent (let's say since 2000) disruption that took hold of a market (and not just a figment of the imagination of a pageview-whoring tech pundit) that was not accompanied by, or based on, explosive growth of that market. I'll wait.

You see, I could not think of any. I'm sure there are some, and I'm hoping that someone will point them out to me, but I thought about this for a day and could not remember a single one.

The last two great disruptions, smart phones and tablets, are still seeing close to 100% growth rates. Maybe Square is sitting on one, with their gorgeous simple payment interface, but they are US-only for now, and they still use VISA and MasterCard, so I don't think they even qualify as disruptive, given that they are not defying the incumbent.

But back to the disruption thing. I don't think it is possible to be disruptive on the TV space.

First, you don't have explosive growth on TV set sales. And without explosive growth, the incumbent has time to adjust to the changes introduced by the contender.

Second, the TV space is filled with big players, with huge financial interests at stake. They form an interlinked web that is much stronger than any of them standing alone. Apple may have (had?) an influence on Disney, but that is just one single player.

Another example: look back at the 18-week period this spring in the USA, with the NFL labour dispute. The entire sport is supported by over one billion dollars in TV revenues. I don't know about you, but I would be wary of telling 32 times 50 players, each one showing about 2 square meters of hardened muscle, that their business got disrupted (apologies to all NFL players who might be reading this and take it the wrong way; I'm a big fan and watch your games regularly; I'm sure most of you are cuddly, and friendly to kids and animals).

Third, the fact that current TV sets are passive. It allows TV programmers to play with the ratings and follow a hit TV show with a less popular one, in the hope that the audience is too lazy to change channels (Nielsen tells us that, yes, they are...). And this helps them push more ads. The entire business and operation is hoping that the user turns the box on and leaves the remote alone, dozing over average (at best) quality content while waiting for the hit show.

So no, I think disruptive is asking too much. I believe subversive is the best the internet technocracy can expect in the near future.

They will slowly take over functions that current contenders in the same space are providing. For example, the AppleTV with AirPlay could take over casual gaming from the Wii. Both Google's and Apple's offers, especially teaming up with Netflix, will slowly kill the DVD/BluRay player and the cable operators' pay-per-view services, not because they are better at providing movies and TV shows on the TV set (at best they are slightly better, and only because the cable operators have been slow or bad at reacting to the threat; technically the operators are better placed to provide that service), but because they can integrate better and faster with other methods of consumption, from tablets to smart phones to the Web.

And that is what the current TV add-on boxes (like Google's and Apple's current offerings) are already doing, starting with the basics, moving into games, and soon small apps controlled via smart phones or tablets (and solving the user interface problem in the process).

Given all this, we can allow a different interpretation of the Steve Jobs quote: maybe he hadn't cracked the problem, maybe he had just finally understood the size of it, and that it would take a long time and effort to slowly chip away at the wall.

And maybe, just maybe, with that final hurrah, he could just send his competitors into an all-out race against that same big strong wall.

October 24, 2011

They will never learn

Microsoft and Intel need to stop ruining good-looking hardware...

Gorgeous

October 23, 2011

Google HTTPS Search

Recently Google announced that they will redirect their logged-in users to the HTTPS version of their search engine.

(skip the rambling, take me to the summary: tl;dr)

I think nobody can dispute that this is a good thing. HTTPS provides an almost (if only Certification Authorities (CAs) weren't so prone to hacks...) perfect assurance that you are indeed talking to the correct server. HTTPS with CAs is a solution to man-in-the-middle attacks.

We can only hope that this redirect will eventually become the default behavior, because regular and non-authenticated users will still be using the plain HTTP search, and need to specifically ask for the https://www.google.com/ secure site to be safer. But given Google's goal of securing personalized search results, I think it is acceptable to limit this to logged-in users for now, given that they are the ones with access to the personalized search results. Mind you, this might also be a way for Google to load test their HTTPS setup.

But starting in the third paragraph, things take a turn into a new reality: Google will no longer provide the query information to the site you click on in the organic search results page. There is no direct explanation in that article of why they will start (or stop) doing this.

Before going further, let's make one thing clear: there is no technical limitation that prevents Google from forwarding the individual query. In fact, this is working right now. When you use the HTTPS version of Google Search, the URLs are rewritten so that they go through a Google jump page, and from there they are redirected to the final page. But this Google jump page is hosted on a plain HTTP site, so the final redirect to your page includes it as the Referrer, with the full query information. Google could just keep using this scheme to provide sites with the information they want.
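For anyone who has never looked at this mechanism from the receiving side, here is a tiny sketch of what a destination site does with that Referrer (it assumes the URI module from CPAN, and the referrer value is a made-up example): it just pulls the q parameter out of the query string.

# Sketch of the receiving site's side of the deal: extract the q= parameter
# from the Referer header. Assumes the URI module from CPAN; the header value
# below is a made-up example.
use strict;
use warnings;
use URI;

my $referrer = 'http://www.google.com/url?q=perl+sync+design&sa=t';
my %params   = URI->new($referrer)->query_form;
my $query    = $params{q};    # gone once Google stops forwarding it
print defined $query ? "searched for: $query\n" : "no query in referrer\n";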

What they do tell you is that if you click on an ad on the results page, the query information does get sent to the destination site.

This might seem like a double standard, but sites that appear in organic search results and sites that appear because they paid Google are clearly two different populations, so they can receive different treatment from Google if Google chooses. After all, when I pay to get my site on the search result page, I have a right to get all the information about why my ad was shown there. I paid for that right.

It seems we sometimes forget that Google is a company, and as with all companies, the goal is to make a profit selling goods. The goods Google sells are your searches. The fact that Google has become so large and useful as to be considered indispensable, and that some small changes in its behavior can make or destroy entire business models, is something that we should be aware of, and if possible try to fight against. It's never good to have so much power in the hands of a private company, be it Google, Apple, Microsoft or Facebook. If Microsoft was investigated in the 90's, I will not be surprised to see the same happen to Google in the next decade.

The point is: Google is deliberately choosing, as is their right, to stop sending valuable information to outsiders for free. And I think this will move them closer to an investigation by some governments.


One of the business models threatened by this change is the land of search engine optimization. As we can expect, they are livid, and are reacting accordingly. There is one particular article that caught my attention: Google Puts A Price On Privacy.

The premise of the article is that with this change, Google will only share its search data if you pay them.

Well, duh...

Google is a company, a for-profit company; of course it wants to get paid.

The second paragraph is even better:

Google’s a big company that goes after revenue in a variety of ways some critics feel put users second.

(emphasis mine). Now this is laughable. There are two different types of people that classify as users:

  • users of the search engine that use it to find stuff: we get a free (and great) service, and in exchange we get to see some ads - I think these are the real users of the service;
  • users of the ad system: they pay to show ads of their products on the search result pages - I call these customers of the service.

But no matter which group of users you pick, the change Google announced is good for them.

As users of the service, we get a little bit of extra privacy from third parties: strangers on WiFi networks, governments with intent to control their citizens' civil rights, you name it, it gets better.

For customers of the service, they get more value from their payments because now they will have exclusive access to valuable information.

So who are the users who lose? I think they are two other groups, and only one of them really loses a lot. The first is all of us who have a site and were used to receiving the search query information. Some of us used it for actually useful features on our site; others would only see it via the analytics system they were using. So we lose a little. We can still get part of the analytics information via Google Webmaster Tools if, for some reason, we want to optimize our search ranking, but we cannot adjust in real-time to our users' queries.

People who make money re-selling the search queries that leaked from Google Search are the really big losers, though. They would use the search queries to target ads on their own sites. And it was a good business.

And I believe the article was written by someone in the second group, or someone who writes for the second group.

The claims are all interesting:

  • "[Google is ] perfectly happy to sell out privacy": not news, the moment ads started showing up on Google Search results, we knew they were selling our privacy - we are the product. Besides, explain why someone who makes a living on the search keywords that leak via the referrer is not violating our privacy also?
  • "...blocking is a pesky side effect to a real privacy enhancement Google made": no, blocking is not a side effect. Today you can search on HTTPS Google search site and still get the information on your non-HTTPS site. Blocking is a deliberate decision by Google, don't blame technology about this one;
  • "Google could have pushed many sites across the web to become more secure themselves": this is based on the logic that if all sites were HTTPS then the referrer information would still flow to the sites. Google is actively trying to change people over to SSL-based communications, to the point that they designed a protocol that requires it (SPDY) and bundled that protocol into their own browser. I don't see how you can accuse Google of not doing plenty for the improvement of the security on the web. Even this change is a clear prod in that direction - as the author notes, if you want to keep receiving the data, switch to HTTPS;
  • "Google could have [...] its default search [redirected] to be secure. [...] using Google’s own figures, [logged in users filter will ] protect less than 10% of Google.com searchers": true, but I believe that this is an engineering decision - lets try with 10%, and if all goes well, switch everybody over.

I guess that if my business were based on referrer information I would be pissed today, and even though I don't depend on it, I really do hope that Google keeps sending those lovely q query parameters in the referrers.

But most of this article's facts and complaints are at best self-serving, if not just plain wrong.


My biggest doubt is the SPDY angle. If you use a recent version of Chrome, your access to Google is done using SPDY, which includes TLS security by default. So even if you just type www.google.com you are using a secured version of the search engine, but the following click will still be plain old HTTP with the full referrer information. Will they change this too? What are the rules for a SPDY to HTTP transition? Should they be the same as for an HTTPS to HTTP transition?

Bottom line: I appreciate the default redirect to HTTPS, and I agree that Google has a right to provide a better service to paying customers. But I don't believe they will be able to sustain this policy of not forwarding search keywords. Not only is it petty, it might also trigger a government investigation on the subject, something that I think they would not want.

October 06, 2011

Here's to the crazy ones

Godspeed Steve.

(image from Ars Technica article about the life of Steve Jobs)

I leave you with my all-time favorite ad. When I learned the news, and whenever I think of his impact on our lives, I think of this ad.

June 21, 2011

Apple TV as another billion dollar business

In a recent blog post, Dion Almaer talks about AirPlay and how he can see it as a big part of almost every application you use today. I like AirPlay. In fact, I might end up buying an Apple TV just to have an AirPlay-enabled big TV.

I've read about games on the iPad/iPhone using AirPlay to use the HDTV as a display device, and it is a wonderful idea: you sit down with one of your favorite games in front of the big screen, flip a switch on your phone, and now you can use the entire phone screen as a controller. Kind of like a Wii U, I guess.

But there is another scenario I like even better. Let me tell you the story of the "iPad for the kids" here at home.

A couple of months back, when the iPad2 was first announced, we decided not to buy any portable game consoles anymore. The games on those might be better, but the economics are wrong: you pay less for the console, but after the 4th or 5th game, you are already into iPad2 price points.

On the other hand, if you get an iPad2, you can have games at between €0.79 and €5 a pop, even without counting the enormous number of free games. So we bought an iPad2 for the kids and created an account with an allowance on the App Store. Entertainment problem solved. And as a bonus feature, we can load great educational apps like Mathboard or Algebra Touch too.

So we have the iPad, mainly for games, and AirPlay (as soon as the Apple TV arrives) to make use of the HDTV. But I'm skeptical about real-time games via AirPlay. I very much doubt that the delay will go unnoticed.

Fortunately there is a better option, one that also might promote the Apple TV from a hobby to another billion dollar business for Apple.

You see, the new Apple TV is powered by an ARM processor and iOS, exactly the same combo the other iOS devices use.

If you could download the games over the air from your iPad/iPhone, have them run on the Apple TV using the HDTV as the display, and use your iOS portable device as a remote control, then your Apple TV would become a games console too, with no display delay at all.

It would not be as powerful as an Xbox, PS3, or Wii, but that would not matter: it would have a killer price point, and you could start a game on the iPad, arrive home, keep playing on the HDTV, and switch back to the iPad later.

Now that would be awesome.

