A List of Web Analytics / Google Analytics Experts in Montreal


Flickr picture by Michel Balzer

Following a tweet I sent early yesterday morning to try to build a list of Web Analytics / Google Analytics experts offering their services in Montreal, here are some names of consultants and companies in that field. Please note the following:

  • This list is probably not exhaustive. If I’m missing anyone, I apologize in advance. You can let me know in the comments and I’ll update my post.
  • This is not an endorsement of any of the people/companies on my part, just an alphabetical list of names (and contact information) that were suggested to me.
  • People privately suggested analytics experts who work inside large organizations that do not resell their analytics services. I chose to leave those individual names out of the list to avoid poaching.

1) Adviso (Google Analytics Certified Partners)

2) Adapt or Die Marketing (suggested by Pier-Luc Petitclerc)

3) Bell Web Solutions

4) Eric Baillargeon

5) Cossette / Magnet Search Marketing (Google Analytics Certified Partners)

6) Justin Cutroni (he’s in Burlington, Vermont but that’s close enough to Montreal) (Google Analytics Certified Partner)

7) NVI (Google Analytics Certified Partners)

8) Ressac Media (Google Analytics Certified Partners)

  • Their main Web site
  • The description of their analytics offer
  • Contact: @ressacmedia on Twitter or getinfo(AT)ressacmedia.com
  • Address: 305 Bellechasse, # 302, Montréal, 514.843.7029

9) Jacques Warren (WAO Marketing)

10) W.illli.am (Google Analytics Certified Partners)


As soon as I published my blog post, I got the following suggestions:

11) Herman Tumurcuoglu

12) AT Internet

  • Their main Web site
  • Their analytics product
  • Contact: Alexandre Métier, alexandre.metier(AT)atinternet.com or Fehmi Fennia, fehmi.fennia(AT)atinternet.com
  • Address: 33 rue Prince, Montréal, 514 658 3571

13) Samuel Lavoie (Google Analytics Individual Qualification)

14) Stéphane Hamel

15) Alistair Croll

16) Sean Power

17) Nofolo

18) DevRun

For the complete list of Google Analytics Certified Partners outside of Montreal (or the latest up-to-date list of Montreal partners), visit the Google Analytics web site.


On the Importance of Data Payloads

Richard MacManus of ReadWriteWeb examines the consequences of the rise of the Internet of Things, where objects start broadcasting data to the rest of the world. I found this excerpt especially enlightening as it explains the variables that we could soon see as data points:

[Google VP Marissa] Mayer talked about “a sensor revolution,” including data from mobile phones. She remarked that “today’s phones are almost like people,” in that they have senses such as eyes (a camera), ears (a microphone) and skin (a touch screen).

HP’s [Parthasarathy] Ranganathan used the term “ubiquitous nanosensors,” that can have multiple dimensions per sensor:

* Vibration
* Tilt
* Rotation
* Navigation
* Sound
* Air flow
* Light
* Temperature
* Biological
* Chemical
* Humidity
* Pressure
* Location

What it means: imagine a world where every piece of communication comes with a “data payload”. We’re starting to see it with smart phones and Twitter/Facebook (URL links, photos, location, etc.) but we can expect more of these structured data variables in the future. They are key to the development of the Internet as they will provide multifaceted context to filter and interpret the billions of messages around us.

Update: you can read more about Twitter “Annotations”, the method by which we’ll be able to attach complex content payloads to tweets.
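To make the idea concrete, a message carrying a structured data payload might be sketched like this (all field names below are purely illustrative assumptions, not an actual Twitter Annotations schema):

```python
import json

# Hypothetical sketch of a message carrying a structured "data payload".
# Every field name here is an illustrative assumption, not a real schema.
message = {
    "text": "Great espresso at this cafe!",
    "payload": {
        "location": {"lat": 45.5017, "lon": -73.5673},   # e.g. Montreal
        "sensors": {"temperature_c": 21.5, "light_lux": 300.0},
        "links": ["http://example.com/cafe"],
    },
}

def has_location(msg):
    """Filter predicate: does this message carry location context?"""
    return "location" in msg.get("payload", {})

# A downstream service could filter the message stream by payload context.
print(json.dumps(message["payload"]["location"]))
```

The point is that the payload, not the text, is what lets services filter and interpret billions of messages.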

Print and Internet Yellow Pages Users Are Ready to Buy: No Surprise There.

A few weeks ago, the Yellow Pages Association released some data related to purchase and purchase intent when using print and online Yellow Pages.


  • 8 out of 10 Internet Yellow Pages searches were from people who said they were ready to buy, with 36 percent reporting they had made a purchase after finding local business information at an Internet Yellow Pages site, and an additional 44 percent saying they intended to make a purchase.

[Chart: ready-to-buy index, Yellow Pages]

  • 78% of print Yellow Pages searches were from people who said they were ready to buy, with 39 percent reporting they had made a purchase after finding local business information in a print Yellow Pages directory, and an additional 39 percent saying they intended to make a purchase.
  • 40% of those who made a purchase said they found and made that purchase from a new company after reviewing local information on an Internet Yellow Pages site. (it’s 35% for print Yellow Pages)

[Chart: new business relationships, Yellow Pages]

  • Of searches made by people who used Internet Yellow Pages, 37 percent said they had no company name in mind when they started their search. (34% for print)

[Chart: name awareness, Yellow Pages]

I was curious to see if age made a difference in those numbers, and the Yellow Pages Association provided me with these data points:

[Chart: intent to purchase and purchase data by demographic, Yellow Pages]

As you can see, the differences between age groups are small.

What it means: I’m not surprised by the results. Print and online business directories have always been about intent to purchase and actual purchase. The good news is that this is still true in 2010. What I would have wanted to see was a comparison between media, for example versus search engines or word of mouth, but those data points were not available. I suspect search engines would have received a lower score than Yellow Pages, but I was wondering about friend recommendations: do those lead to purchase?

Chris Sacca on Douchebags, Porn and Lube

On the second afternoon of the LeWeb conference, Chris Sacca, founder of Lowercase Capital LLC, presented his three dominant trends for 2010 in a very tongue-in-cheek presentation.

[Photo: Chris Sacca at LeWeb, Paris, December 2009]

They are:

1) Douchebags

  • According to Wikipedia, “the term refers to a person with a variety of negative qualities, specifically arrogance and engaging in obnoxious and/or irritating actions without malicious intent”
  • According to Sacca, they create hostile environments on the social Web but he says we’re moving past this with the rise of the real Web
  • This is happening because of distributed authentication. We’re being verified against an actual community plus location information.

2) Porn (the traditional definition, i.e. material that is intended to cause excitement and arousal)

  • Sacca then showed us a series of graphs and data charts
  • “Data is porn”
  • “Data enriches all Web services we can provide”
  • “We’ve never known more about people’s preferences”

3) Lube

  • There’s friction in the e-commerce funnel (shopping cart abandonment)
  • Major mistake: we ask people to provide information before giving a service
  • iTunes: makes it easy to buy
  • Amazon: one-click makes it easy to buy also.
  • They’ve removed the friction
  • Signing up and creating a profile are also painful. As a good example, he talked about Posterous, which allows you to sign up using e-mail. “E-mail is simpler than logging in.”
  • He suggested developers provide benefit first before the hurdle of signing in.
  • “Let’s lubricate the Web.”

The New DexKnows Site: Simpler User Experience and Improved Results

I had the opportunity this week to sit down (virtually) with Jeff Porter (VP / General Manager of DexKnows.com at RH Donnelley) and some of his team members to go through the various features and functionalities of the new DexKnows.com site. The new site was designed and built by the Business.com team (acquired by RHD in 2007) and uses the open-source Lucene search technology. The site is currently in beta and offers multiple improvements over the last few versions.

[Screenshot: Dex Yellow Pages, Online Phone Book Directory for Local Business Listings]

Main areas of improvement include:

  1. A simplified user interface (more search engine-like)
  2. A clear focus on breadth, depth and quality of local data
  3. Better drill-down results both from a hyperlocal and hypervertical point of view

I jotted down the following random notes on what struck me as interesting with the new site:

  • Introduction of geo-taxonomy (love that word!). The site offers users different levels of geographic drill-down, including metro, city, neighborhood and landmarks. One of the challenges of local search sites is “guessing” the user’s geo-intent. Is the user searching for a specific city or for the metro area? In this case, the team decided to return multi-city metro results on a refine page showing users additional geo-refining options. For example, a search for pizza in Seattle assumes by default that you’re doing a metro-area search. It leads you to a metro page showing very basic listing information and all the metro city options. Those simplified basic listings remind me of Google local search results. Depending on headings (think restaurants or lawyers), you can also drill down on “specialties” (think “family-friendly” for restaurants or the types of lawyers). If you select individual cities within a metro area, you get a city page with more detailed business listings. You can then drill down to specific neighborhoods.
  • Category suggestions based on keyword entered (not the same as keyword suggestions!). It allows for a better mapping between unstructured keyword search and structured results.
  • As usual, each business has a profile page. I really like the integration of Google Street View in there.  Makes me think this business profile would be tremendously valuable in mobile situations (think Dexknows iPhone app).
  • If you search for service categories where work is usually accomplished in your home (plumbers, electricians, etc.), you get a service area map instead of the specific location of the merchant. The scope of the merchant service area is determined by the print directories in which they advertise.
  • You can do brand search (try “Nintendo in Denver”) but you can’t combine keywords (try “Nintendo used games in Denver”; it should return this merchant but results in a failed search). It’s really the only place where I was truly disappointed with the results.
  • I have to mention they have a very nice admin section for their advertisers where they can manage their listings and profiles, view their different products and get an estimate of the traffic they should be getting.
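The category-suggestion idea above, mapping an unstructured keyword onto a structured taxonomy, can be sketched roughly like this (the category names and the naive substring matching are my own illustration, not DexKnows’ actual implementation):

```python
# Minimal sketch of keyword-to-category suggestion. The taxonomy below is
# a made-up illustration; a real directory would have thousands of headings.
CATEGORIES = {
    "pizza": ["Restaurants > Pizza", "Food Delivery"],
    "plumber": ["Home Services > Plumbers"],
    "lawyer": ["Legal > Lawyers", "Legal > Notaries"],
}

def suggest_categories(query):
    """Map a free-text query to structured category suggestions."""
    query = query.lower()
    suggestions = []
    for keyword, cats in CATEGORIES.items():
        if keyword in query:  # naive substring match, for illustration only
            suggestions.extend(cats)
    return suggestions

print(suggest_categories("pizza near downtown"))
```

The value of the mapping is that a messy query lands on structured headings, which can then drive the geo- and specialty drill-downs described above.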

What it means: I really like what the RHD team has done with their new site. In the online directories arms race, the game seems to be focused on two main elements: simplified usage and quality of data. And the RHD online team is definitely focused on those elements, the same way we were at Yellow Pages Group when I was there. But it also made me realize that the industry is still very much looking at Google (or Yahoo or MSN) as the local search benchmark. Instead of doing incremental innovation, how do you leapfrog search engines? In other words, what is keeping Google up at night? The answer to that question leads to a possible new strategic direction. Community, humans, social interactions and marketplaces are what’s keeping Google up at night. Facebook and LinkedIn (for example) have built up amazing identity and social graph connection systems, which they can (and will) leverage as much as they can. And we will get to ask ourselves the age-old question: who do we trust most? Man or Machine?

Update: the official announcement.

Data Portability: LinkedIn Now Allows You To Export Your Data

Big storm this week in the blogosphere as Robert Scoble’s Facebook account was temporarily suspended for breaking the site’s terms of service. He was using a new tool from Plaxo Pulse that was extracting and matching Plaxo and Facebook users. As Robert said: “I wanted to get all my contacts into my Microsoft Outlook address book and hook them up with the Plaxo system, which 1,800 of my friends are already on.” Scoble was eventually reinstated but the debate about data portability now rages on (see also DataPortability.org).

[Screenshot: LinkedIn export function]

A few minutes ago, I discovered that LinkedIn now allows you to export your data in various formats (.CSV and .VCF). I don’t know how long they’ve been offering this feature (not long, I suspect) but it’s an extremely smart move to position themselves as the definitive business-oriented social network. Bravo!
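Reading such an export back is straightforward. Here is a minimal sketch using Python’s csv module (the column names and sample rows are my own assumptions for illustration, not LinkedIn’s actual export format):

```python
import csv
import io

# Assumed sample of an exported contacts CSV; real column names may differ.
sample_export = """First Name,Last Name,Company
Jane,Doe,Example Corp
John,Smith,Acme Inc
"""

def load_contacts(csv_text):
    """Parse exported contact rows into a list of dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(csv_text)))

contacts = load_contacts(sample_export)
print(len(contacts))
```

Once the rows are plain dicts, moving them into an address book or another service is trivial, which is exactly the point of data portability.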

iBegin Source: Q&A with Ahmed Farooq

About 10 days ago, iBegin released a new service called iBegin Source, “a comprehensive source of nationwide business data”. As I think this is a very innovative way of aggregating and licensing local business information, I contacted Ahmed Farooq to ask him a few questions.

Q: Can you tell the Praized blog readers about who you are and what you do?

A: I am the Director of Enthropia Inc, a web-dev firm located in Toronto. We have roughly 20 people working with us.

Q: What is iBegin Source?

A: iBegin Source is about raw business data. Buying business data is not cheap (and it is woefully inaccurate). We have made the data affordable and have coupled it with an open system (allowing for much easier updates), aiming for more accurate data.

Q: Where did the idea for iBegin Source come from?

A: It actually came as a defensive play. As we worked on iBegin City sites, we realized that we were (in essence) at the mercy of other data providers. We ended up creating our own dataset and even our own geocoder.

Q: What are the data sources for the iBegin database?

A: We seed with the usual suspects – telco records, federal/state agencies. We then supplement these with extra databases (restaurant databases, business license filings, registrations, change of addresses, other purchased databases, etc). In the first week we had roughly 50 user-submitted updates.

Q: iBegin Source could be quite disruptive. What has been the reaction so far of major local data providers like Acxiom, InfoUSA or Localeze?

A: They are watching us. Publicly they are shrugging us off, but once more and more updates come through our system, it will get interesting.

Q: I especially like the trackback update mechanism, the idea that all iBegin Source partners will work together to improve the database in the long run (almost à la Wikipedia). We know how much of an issue that is in local search data. How many sites (using iBegin Source) do you think you’ll need to reach critical mass, i.e. constant improvements and the best data source out there?

A: The trackback is just one element of our ‘update mechanism’. Based on a churn rate of 3 million, I believe we need 1 million updates a year (2500 a day) to blow the rest out of the water. This number is a combination of user-submissions and trackback updates.

Q: iBegin Source feels like an open source project: for example, you source content from users, and the more people use it, the stronger it becomes; it’s also free for non-commercial usage. Yet, it’s not structured like other open source projects I know (I’m thinking of MusicBrainz). Have you thought about setting up iBegin Source as a non-profit organization, either before launching it or in the future?

A: Data acquisition is expensive. Very expensive. iBegin Source would simply collapse on a non-profit model. The best we can do is what we have now – a free download for non-commercial usage, hopefully pushing enthusiasts and hobbyists to build interesting local sites. We also intend to award free commercial licenses to the most intriguing sites.

Q: Do you think you’ll have a hard time convincing local search partners to work with you given that you already operate a local search destination site?

A: Some people do get confused. I’m sure TrueLocal has that headache too. But what better demo of our data and what you can do with it than showing it off on our own sites? Mind you, we are only in four cities (two of them Canadian), and in none of the major US markets.

Q: Right now, you offer data for the whole US market. What countries are next?

A: On the drawing board are Canada, Germany, Australia, New Zealand, UK, Italy, and a few more. Estimated time of arrival is unknown at this point.

Thank you Ahmed!