Friday, 29 March 2013

Google Easter Eggs, Google Surf Board & Solar Cars


In this week’s Search In Pictures, here are the latest images culled from the Web, showing what people eat at the search engine companies, how they play, who they meet, where they speak, what toys they have, and more.

Sunday, 17 March 2013

Google Panda Update 25 Seems To Have Hit

Many webmasters and SEOs believe that Google released an update to its Panda algorithm late yesterday.
We reached out to Google to confirm or deny the Panda update, as we have done 24 times previously; but this time, Google told us it is unlikely to confirm future Panda updates, since Panda is being incorporated into its normal indexing processes.

It would not be surprising if this was indeed a Panda update, since Matt Cutts, Google's head of search spam, did say at SMX West that a Panda update would be rolling out this Friday through the weekend. Matt added that although an update was expected this weekend, you shouldn't be surprised if you don't notice it, because Panda updates are going to be more integrated and less noticeable going forward.

I am not sure whether this push was the last manually triggered Panda refresh or whether Panda is already fully integrated into Google's normal indexing process. My guess is that this was Google's last manual push, and that from now on it will most likely not push the algorithm manually.

The last Panda update we had confirmation on was Panda 24, so this one would be coined Panda version 25.

Wednesday, 13 March 2013

Google’s Matt Cutts On Upcoming Penguin, Panda & Link Networks Updates

Google's head of search spam, Matt Cutts, announced upcoming updates to Google's Penguin and Panda algorithms, as well as new link network targets, for 2013. Matt announced this during the SMX West panel, The Search Police.
Significant Penguin Update

Matt said that there will be a large Penguin update in 2013 that he thinks will be one of the more talked about Google algorithm updates this year. Google’s search quality team is working on a major update to the Penguin algorithm, which Cutts called very significant.

The last Penguin update we have on record was Penguin 3 in October 2012. Before that, we had Penguin 2 in May 2012 and the initial release in April.

So, expect a major Penguin release that may send ripples through the SEO industry this year.

A Panda Update Coming This Friday Or Monday
Matt also announced there will be a Panda algorithm update this coming Friday (March 15th) or Monday (March 18th). The last Panda update was version 24 on January 22nd, which makes this one of the longer gaps between Panda refreshes that we've seen.

Saturday, 2 March 2013

Google Publishes Its Search Quality Rating Guidelines For First Time

As part of the How Search Works interactive infographic Google released today, they have decided to publish their search quality rating guidelines publicly to the world.
You can access the 43-page PDF document over here. It was most recently updated November 2, 2012.

As you may remember, the document was leaked back in 2008, 2011, 2012 and other times, until Google finally said it was considering making the document public. Today, it has.

Search quality raters are outside contractors Google hires through a third-party agency to rate the search results. Their ratings are not used to rank search results but rather to measure the quality of the results. We have interviewed a search quality rater in the past.

Between the search quality rating guidelines and the SEO starter guide Google has published, you should have plenty to read to brush up on your SEO.

Friday, 1 March 2013

New German Law Will Allow Free “Snippets” By Search Engines, But Uncertainty Remains


The good news for search engines like Google is that a proposed German copyright law won't require them to pay to show short summaries of news content. However, uncertainty remains about how much might be “too much” and require a license. The new law is expected to pass on Friday.
Der Spiegel explains more about the change:

Google will still be permitted to use “snippets” of content from publisher’s web sites in its search results….

What the new draft does not stipulate, however, is the precise definition of the length permitted.

The draft bill introducing an ancillary copyright for press publishers in Germany (Leistungsschutzrecht or LSR) goes to a final vote at 10am German time on Friday. Below is my background on the hearings that happened this week, which in part led to the snippets change.

From This Week's Technical Hearing

Despite all the procedural and constitutional objections to the Leistungsschutz bill, there are also a couple of technical and political ones. Critics (and there are plenty of them) raise concerns that the collateral damage from this change in copyright will hurt search engines, innovation in general and especially smaller press publishers. They point to ambiguous language in the bill that will cause legal uncertainty and lawsuits that will take years to be settled.

The German government and supporters of the bill have done little to address these objections. On Saturday, I published an advance copy of the government's answers to a letter of inquiry from the opposition Left Party. There is a continuing pattern in the government's responses of referring open questions to the courts or simply ignoring the question.

One of the last opportunities to discuss the mechanics of this ancillary right within parliament was a 90-minute expert hearing on Wednesday at the subcommittee for New Media (Unterausschuss Neue Medien, UANM) of the German Parliament.

Public invitations for this hearing were sent out only a couple of days ago, after two weeks of behind-the-curtain negotiation between the governing factions in parliament (Christian Democrats (CDU/CSU) and Liberal Democrats (FDP)) and the opposition factions (Social Democrats, Left Party and Green Party).

CDU/CSU and FDP had previously refused to schedule another hearing next to the judiciary committee hearing in January, saying that all questions could also be addressed in this expert hearing. As it turned out, there were a couple of technical questions that could not be addressed, due to the fact that none of the invited experts in the judiciary committee hearing were experts in the field of technology. How could anyone have known that there are at least two kinds of experts out there!

Invited experts were:

- Dr. Wieland Holfelder, engineer at Google (there was a consensus agreement by the committee members that he could pass non-technical questions to Google legal counsel Arnd Haller, who was sitting behind him)
- Dr. Thomas Höppner, representative of the press publishers' association BDZV
- Prof. Dirk Lewandowski, University of Applied Sciences, Hamburg
- Michael Steidl, International Press Telecommunications Council (IPTC), London

Two experts (Höppner and Steidl) were invited by the majority factions, and two (Holfelder and Lewandowski) by the opposition. The hearing followed the usual procedure: three rounds of questions by members of parliament, with each faction allowed either two questions to one expert or one question to two experts. There was no opportunity for introductory statements by the experts and no strictly enforced time limit on answers.

So, in order to speak at all, an expert has to be given a question by a member of parliament. An expert is not allowed to ask questions or to rebut other experts directly. The result is a predictable strategy: each side gives softball questions to its own experts and potentially compromising questions to the other side's experts. As at many hearings, it has to be assumed that questions were exchanged beforehand and that there was some expectation of what the answers would be. This is especially true for partisan experts whose employers directly benefit from, or suffer under, the outcome of this legislative process.

Some of the softball questions gave the experts the opportunity to explain how robots.txt works (Holfelder) or to point out its shortcomings (Steidl and Höppner).

Holfelder introduced himself as an engineer who implemented his own web crawler 14 years ago. He distributed printouts of robots.txt examples and the resulting snippets in the search engine results pages. He explained the additional meta tags Google supports for adding or removing content from its results (or those of any other leading search engine). To some extent, his presentation felt both verbose and strangely elementary. In an ideal world, none of this information would have been new to a subcommittee that specifically focuses on such topics.
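
For readers unfamiliar with the mechanism Holfelder was describing, here is a minimal sketch, using only Python's standard library, of how a compliant crawler consults a site's robots.txt before fetching a page. The example.com rules, URLs and meta tags shown are invented for illustration.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as a publisher might serve it at
# https://example.com/robots.txt
robots_txt = """
User-agent: Googlebot
Disallow: /archive/

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler asks before fetching each URL.
print(rp.can_fetch("Googlebot", "https://example.com/news/story.html"))   # True
print(rp.can_fetch("Googlebot", "https://example.com/archive/2012.html")) # False

# Page-level control works via meta tags in the HTML itself, for example:
#   <meta name="robots" content="noindex">    keep the page out of the index
#   <meta name="robots" content="nosnippet">  index the page, but show no snippet
```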

Petra Sitte (Left Party) asked Holfelder to comment on ACAP, a protocol that was proposed by a few publishers and has failed to gain any meaningful acceptance in the market. Holfelder gave a few examples of how implementing ACAP would be prone to abuse by spammers, as it mandates the way in which provided descriptions have to be shown.

Konstantin von Notz (Green Party) asked Holfelder whether it was possible for a search engine provider to detect whether specific content on a web site is covered by the LSR or not. This is, in my opinion, one of the most important questions about this bill, because it outlines the potential for huge collateral damage and legal uncertainty over the coming years.

The ancillary copyright is awarded to a press publisher (a press publisher being defined as anyone who does what the press usually does) for his press product (a product of what a press publisher usually does). It exists alongside the copyright awarded to the author, who can license his or her content to anyone else. This means it is not the text itself that determines whether content is covered by the LSR.

Here is an example: a journalist maintains a personal web site to advertise his services as a freelancer. He has a selection of half a dozen of his articles on the site to show potential customers his journalistic skills. These articles are, of course, protected by copyright. They will not, however, be covered by the ancillary copyright, because he is not a press publisher. The very same texts on a magazine's web site will be covered by the LSR. How can a search engine determine whether text on a web site is subject to both copyright *and* the LSR?

Holfelder replied that Google has a couple of heuristics to determine whether a certain page is provided by a press publisher. However, this law has no provisions for “honest mistakes”. If Google fails to detect LSR content and does not receive prior permission to index such content, Google faces legal consequences. There is no such thing as a “warning shot”, nor any obligation for the press publisher to proactively inform a search engine whether it thinks a certain page is LSR-covered. This is the legal equivalent of a minefield.
Holfelder stated that a search engine would in this scenario tend towards overblocking in order to avoid a lawsuit for violating the LSR.
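
To make the detection problem concrete, here is a purely hypothetical Python sketch of the kind of heuristic a crawler might apply. The signals, weights and threshold are invented for illustration only and are not Google's actual method.

```python
# Hypothetical signals a crawler might weigh when guessing whether a page
# belongs to a "press publisher" in the LSR sense. Purely illustrative.
PRESS_SIGNALS = {
    "has_news_sitemap": 2,      # the site submits a news sitemap
    "has_masthead_page": 1,     # an Impressum/masthead naming an editorial staff
    "publishes_daily": 2,       # regular, dated editorial output
    "listed_in_news_index": 3,  # already appears in a news aggregator
}

def looks_like_press_publisher(signals, threshold=4):
    """Sum the weights of the observed signals and compare to a cut-off."""
    score = sum(weight for name, weight in PRESS_SIGNALS.items() if signals.get(name))
    return score >= threshold

# The freelancer's portfolio site from the example above:
freelancer = {"has_masthead_page": True, "publishes_daily": False}
# A magazine republishing the very same articles:
magazine = {"has_news_sitemap": True, "publishes_daily": True, "listed_in_news_index": True}

print(looks_like_press_publisher(freelancer))  # False -> probably not LSR-covered
print(looks_like_press_publisher(magazine))    # True  -> probably LSR-covered
```

The point of the sketch is that any such heuristic produces a guess, not certainty, which is exactly the legal exposure Holfelder describes.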

Höppner, the press publishers' expert, spent his time mocking a comparison about this bill involving taxis and restaurants. He then asserted that services such as Google News substitute for visiting the original pages, with some rambling about a Google service called “Google Knowledge”. It was hard to tell whether he meant the failed Google Know project or the Google Knowledge Graph in standard Google search.

His main argument on robots.txt was a passive-aggressive one: publishers do not like robots.txt per se; they merely use it to fight for the last crumbs that search behemoths like Google have left them. In other words, if a press publisher provides meta description text or Twitter cards, this should not be read as any kind of agreement to actually use that text to build snippets in a search engine. I severely doubt that this position would hold up in court, or that it reflects press publishers' actual motivations.

Prof. Lewandowski's contribution to the hearing was an interesting one, as he is the first expert in a long time who does not seem to have an agenda with respect to the LSR. His views were balanced and nuanced, highlighting both the high level of acceptance of robots.txt and some of its shortcomings. He pointed out that, at least for Google News, the limited number of sources and its opt-in mechanism (yes, it's more complicated than that) would permit running such a service in an LSR world.

Steidl used his time to explain IPTC's contribution to the world of standards and to mention the RightsML project, which is in active development. He criticised robots.txt for lacking a governing organisation and for being unable to express rights at a sub-article level.

Neither Google nor the press publishers were eager to present actual numbers on Google News usage or on how visitors are directed to third-party web sites.

In round two, Google's legal counsel Haller was asked how Google would react to this bill if it were enacted. He replied that Google does not yet know the final version of the bill and has not decided how to implement it. He pointed out that his company would have to deal not only with publishers from Germany but with publishers from the entire European Economic Area, who could exercise their own LSR rights against Google.

Wednesday, 13 February 2013

Google Hits 67 Percent Market Share Again, Bing Hits Another All-Time High



Core search activity was up pretty substantially in January, and Google's US market share returned to the 67 percent level it held in November — all according to the latest comScore search engine rankings for January 2013.

Google’s market share rose from 66.7 percent in December to 67 percent in January. Bing’s market share was also up, from 16.3 percent in December to 16.5 percent in January — that’s an all-time high for Bing.

The Bing gains, as usually happens, were tempered by a similar decline in Yahoo’s search market share, leaving the “Bing-powered search” combo still in the same 29 percent range that it’s been for some time now.

Friday, 4 January 2013

SEO Isn’t Dead. It Just Got a Life



With every algorithm update or new announcement from one of the major search engines, there will inevitably be a handful of doomsayers proclaiming that SEO is on its deathbed. As always, this proclamation is unfounded, unnecessary and, quite frankly, unbelievable.

SEO isn’t dead. It hasn’t abandoned its role in helping websites achieve their goals of improving organic visibility to generate search traffic. It hasn’t stopped helping business owners engage with their audience on their terms and on their turf. No, SEO isn’t dead. It just decided to get a life.

Instead of focusing solely on the whims of search engine algorithms or manipulating the system for rankings, SEO has found new life by embracing the entire digital landscape. It's embracing social media as another outlet for building relationships that support organic goals. It has learned that if it wants to grow its organic visibility, it needs to speak to the end user, not simply to the search engine.

Winning Friends & Organically Influencing Others

To be clear, just because SEO has learned some new tricks doesn’t mean everything has changed. The best practices that made it so popular in the first place are still very much important. Selecting appropriate keywords, optimizing on-page content and meta tags, and building a site that can be easily crawled by search engines are still fundamental for organic success.

It’s off-page where SEO has changed. What SEO has taken out of its repertoire is low-level manual link building, keyword stuffing and duplicate or spun content. SEO has matured and learned how to identify these tactics as unnatural manipulation; these low quality strategies have been exposed as the spam that they are.

Getting “links” from off-topic directories or irrelevant forums to artificially inflate your rankings may let you sneak by, but it won't do much in the long run, especially when the links come from less-than-reputable sources.

And the biggest lesson SEO has learned is to stop trying to keep up with that fickle algorithm. Chasing the algorithm is like Keeping up with the Kardashians. It’s pointless. It’s mind-numbing. And this type of volatile approach to SEO will, at best, lead to minor short-term gains but most likely will lead to penalties and loss of business.

Instead of optimizing for search engines, it’s time to optimize for users.

SEO has learned that in order to be relevant and effective, it has to be more social and outgoing. It has to reach out to engaged users, journalists and media sources, offering legitimate value in exchange for authentic endorsement.

SEO has also learned to embrace reciprocation: be a good friend and fill up your karma bank BEFORE you have something to promote. Build the relationship first and establish your authority in the space and with the audience so that when you have a legitimate asset to offer the community, it will be received and evaluated based on your reputation within that community. In short, today’s SEO has learned to focus on thought leadership, social outreach and human-centered engagement.

The Habits of Highly Effective SEO

SEO has always been a nuanced approach to marketing, but over the past few years especially it has evolved considerably into a delicate balance between art and science. Successful SEO has embraced the popular game theory concept known as “expected value”: we challenge ourselves to consider the expected return on a particular decision over an infinite number of trials. It has learned that some decisions may get the attention it wants today, but that brief success is more the exception than the rule. It has embraced the fact that the right decision will occasionally (and unfortunately) not yield the results you expect. But, in the long run, implementing legitimate marketing practices, as opposed to “optimizing for search engines,” will produce more true wins than losses.
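
For the mathematically inclined, here is a tiny Python sketch of the expected value idea invoked above. The probabilities and payoffs are invented purely for illustration; they are not real SEO data.

```python
# Expected value = sum over outcomes of (probability * payoff).
# All numbers below are made up to illustrate the argument, nothing more.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# A risky shortcut tactic: a big win sometimes, a penalty often enough to hurt.
shortcut = [(0.3, 10_000), (0.7, -8_000)]
# A slower, legitimate campaign: modest but reliable returns.
legitimate = [(0.9, 3_000), (0.1, 0)]

print(expected_value(shortcut))    # -2600.0 -> negative over many trials
print(expected_value(legitimate))  #  2700.0 -> positive over many trials
```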

Sure, for every door that Google slams shut with a new algorithm, there are a dozen more ways to squeak by with less than honorable tactics. Gaming the system will always be a part of the SEO game as long as it provides quick results and easy victories for those willing to risk it. But for those who decide to take the easy way, it will only create a longer and more treacherous road to success in the future.
Search Engines Just Aren’t That Into You

The biggest realization of this more mature, more socially aware and likeable SEO?

It’s not about them. It’s not about a website or a search engine or keeping up with the algorithms.

It’s about the user. The users who visit search engines with infinite questions looking for a single answer. True SEO (as with true marketing in general) is about appealing to them and their needs, wants and motivations.

It’s going beyond being technical, automated, or dictated solely by process. SEO is not a one-size-fits-all solution that can be boiled down to a simple checklist. SEO involves engagement, participation, and legitimate membership into the community. It involves intimately knowing your audience and catering your marketing, from the on-page to the off-page, to suit their needs. Simply put, it involves being human.

So SEO has branched out to embrace content not for content's sake, but content that the end user will find valuable and informative. It is talking to people and making authentic connections, earning quality links and endorsements, and expanding its digital visibility in authentic and valuable ways. It has realized that by embracing people for who they are and what they want, and not trying to game the system, it has created a following that no new line of code can destroy.

Monday, 3 December 2012

When Google News Fails, Here’s How To Fix It


In today’s world of instant gratification, with Twitter often “scooping” traditional news sources, we still turn to professional journalists for accurate, timely news, and confirmation of the events that transpired. While “breaking” news offers instant awareness, we still want to read news accounts reported by trained pros, who have dug deeply for facts and have published stories that have been vetted by qualified editors.
Nonetheless, we want “fresh” news, and increasingly we want to sample viewpoints from a diverse number of sources. We definitely don’t want yesterday’s “fish wrappers” as dated print newspapers were once called. So we turn to online news aggregators, and one of the most popular, with more than a billion users per week, according to Google, is Google News. With good reason: Google News offers 72 editions in 30 languages, drawing content from more than 50,000 sources.

Saturday, 24 November 2012

FTC Likely To Abandon “Vertical Search” Antitrust Claims Against Google



There are now enough indications to suggest that any antitrust settlement between the FTC and Google — and the FTC would much prefer to settle than test its case in court — won’t involve “vertical search.” An earlier Reuters report, probably resulting from an internal FTC leak, suggested that vertical search wasn’t the core of the agency’s case against Google.

Today Bloomberg is reporting that the FTC is “wavering” on whether to pursue a formal action against Google. In particular the agency’s own people (anonymous sources) suggest they can’t make the “vertical search bias” claim stick legally:

Federal Trade Commission officials are unsure they have enough evidence to sue Google successfully under antitrust laws for giving its own services top billing and pushing down the offerings of rivals, said the people, who asked for anonymity because the discussions aren’t public. Regulators are also looking at whether the ranking system’s benefits to consumers outweigh any harm suffered by rivals including NexTag Inc. and Kayak Software Corp, the people said.

This is huge. The “search bias” argument is the core of FairSearch and other Google critics' complaints against the company. While much of the antitrust wrangling playing out in the press is about public relations, there's a misrepresentation of antitrust law behind the arguments of Google's most vocal critics. The implication is that somehow antitrust law operates for their benefit — it doesn't.

Antitrust law is intended to protect consumers rather than competitors. Protecting competition is the means to the end of promoting consumer interests. But protecting the position of individual companies in the market is not an aim of antitrust rules, although when abuse of a dominant market position harms competition, antitrust violations may be found. As a practical matter, it's often competitors who agitate for antitrust action, as in this case.

While it may seem deeply unfair to rivals that Google can use its search dominance and traffic to promote services like Google Maps, Google Shopping or Google Hotel Finder, these services arguably benefit consumers. And when those services are weak, consumers readily turn to others. For example, Kayak's CEO reported to CNBC a couple of months ago that Google's travel search services so far had “no impact” on its business.

The FTC would have enormous trouble making the case that Google isn’t entitled to “discriminate” between services with its algorithm — that’s the entire point of Google’s algorithm — or that its “promotion” of Google Maps instead of Mapquest, for example, harms consumers in any way. Then there’s the long-standing problem of remedies and the US intervening in the SERP.

FairSearch has tried to answer these issues and critiques with a list of “principles for evaluating antitrust remedies to Google’s antitrust violations.” Attorney Marvin Ammori, whose firm has been retained by Google, argues point by point that these principles are “ill conceived.”

In Europe Google faces similar claims, arguments and issues. Any decision not to pursue the “vertical search” angle in the US could influence the Europeans to reassess their position on that issue.

Decisions are due very soon on both sides of the Atlantic about whether to bring formal cases against Google. However both sets of regulators would much prefer to settle and avoid a protracted and potentially unsuccessful (and therefore embarrassing) legal battle — if they can.

Thursday, 15 November 2012

Hijacking Google Search Results With Duplicate Content


Dan Petrovic has explained how he hijacked a few pages in Google to show his copied version over the original version of the page.
For example, he was able to confuse Google into thinking a page on MarketBizz should really show on dejanseo.com.au instead of on marketbizz.nl.

How did he do it? He simply copied the full page, source code and all, and put it on a new URL on his site. He then linked to the page and gave it a +1, and the hijack took effect days later. Here is a picture of Google's search results for the page, using an info: command and also searching for the title of the page:
He did the same thing on three other domains with varied levels of success. We emailed Google last week for a comment but have yet to hear back.
In some cases, using a rel=canonical tag seemed to prevent the result from being fully hijacked, but not in all cases. There also seem to be cases where using authorship markup might prevent this as well.
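
As a rough illustration of the canonical signal mentioned above, here is a minimal Python sketch, using only the standard library, that extracts the rel=canonical URL a page declares for itself; the example markup and URL are invented.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of any <link rel="canonical"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href")

# Invented example: a page that declares its own preferred URL.
html = """
<html><head>
  <title>Widget pricing guide</title>
  <link rel="canonical" href="https://www.example.nl/widget-pricing/">
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://www.example.nl/widget-pricing/
```

If a scraped copy carries the original page's canonical URL, a search engine has at least one machine-readable signal pointing back to the legitimate source, which may be why the canonical tag blocked the hijack in some of Petrovic's tests.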

Tuesday, 6 November 2012

Google Releases Panda Update 21, Impacts 1.1% Of US Queries In English



Did you feel it yesterday? Some did, a slight shaking in the Google results. Yes, it was real. Google’s confirmed to us that a Panda Update happened yesterday.

Google said that worldwide, the update will impact about 0.4% of queries to a degree that a regular user might notice. For those searching in the United States in English, the percentage is higher: 1.1%, Google says.

This marks the 21st confirmed Panda Update by Google and stays in keeping with the roughly 4-6 week release schedule.

Sunday, 4 November 2012

Infographic: The Death Of SEO, Failed Predictions Over The Years



SEO has been declared “dead” almost from when it first began, as our post from a few years ago, Is SEO Dead? 1997 Prediction, Meet 2009 Reality, covers. Now, a new infographic is out looking at how SEO has been “dying” over the years.

The infographic is from SEO Book and is interesting in that rather than taking a timeline approach, it instead shows examples of various types of people who’ve declared that SEO is dead and why they are, as the infographic puts it, “deluded.”

If you want the infographic for yourself, you’ll find it here: Infographic – Is SEO Dead?

For our own reasons why SEO will never die, well, see our SEO Is Here To Stay, It Will Never Die post from 2010, which says in part:

SEO is about understanding how these search engines get their information and what should be done to gain free traffic from them. SEOs — and search marketers in general — understand the process of search, and they tap into that process to attract visitors.

People have had search needs since as long as they’ve been thinking. Search engines are merely a new, efficient way of answering those needs. The demand for answers isn’t going away; search engines aren’t going away, nor will how search engines provide answers. That means SEO as a way to help ensure you’re one of the answer has a strong future.
The infographic is below; click to enlarge it:

Saturday, 3 November 2012

Google Starts Shutting Down Its City Pages, Shifts To Google+ Local



Google is in the process of shutting down its collection of city pages — a change that follows the shift from Google Places to Google+ Local, and a change that may be reflective of a larger shift in direction for Google’s local efforts.

Mike Blumenthal noticed yesterday that the Portland city page had gone missing. The page used to be accessible at www.google.com/portland/ (which redirected to www.google.com/city/portland/), but that URL now produces a 404 error. The same thing happens with some other city page URLs, like google.com/sandiego and google.com/madison.
Those pages were launched in the summer of 2011 and served as a hub for the Google Places community efforts — which involved “feet on the street” outreach to local business owners and consumers. Here’s what the Portland city page looked like when we first covered the city page launch.

The pages were a central spot for Google’s “recommended places,” upcoming events, current Google Offers and the latest tweets from Google’s staff in each city.

But at least one of the old URLs — google.com/city/austin — now redirects to a corresponding page on Google+ Local: plus.google.com/u/1/+GoogleLocalAustin/posts. And that seems to be the plan for all of the old city pages. We got this statement from Google when we asked about the old city pages being shut down:

Earlier this year we announced Google+ Local – a local search experience that makes it easier to discover and share your favorite places – like a great restaurant or museum. With Google+ Local, information for hundreds of cities around the world including Portland, Austin, San Diego and Madison is streamlined in one place.

Google is essentially saying that the old pages have been made redundant thanks to Google+ Local. But the new Google+ pages don't have the same feature set as the old city pages. There are no local businesses getting the “recommended places” label, for example, and there's no page that lists Google Offers. So it's not an exact replacement that Google's making, at least not at this point.

What’s unclear is the impact — if any — that the change is having on Google’s physical presence in each city. A Google spokesperson told Mike Blumenthal yesterday that “Community Managers are still working with local businesses in a variety of cities around the world,” but there are signs that not all cities are still active in this aspect of Google’s local efforts. For example, the Google+ Local team in Portland is still very active on Twitter and on its new Google+ Local Page, but the original San Diego community manager is now a former Google Places Community Manager. The Google San Diego Twitter account hasn’t posted in more than a year, and there’s no San Diego page that I can find in Google+ Local.

Thursday, 1 November 2012

Google And Rosetta Stone Agree To Settle Suit Over Trademarked Keywords



One of Google’s most well-funded and tenacious opponents in the legal arena, Rosetta Stone, has agreed to settle its trademark suit against the search giant. The language learning software company had contended that the use of its trademarks as “keyword triggers” infringed trademark law and confused consumers.

Terms of the settlement agreement weren’t disclosed.

The three-year-old case was one of the highest-profile legal battles over a practice that has raised the ire of many trademark holders. In the U.S., Google allows AdWords advertisers to bid on trademarked keywords, so someone searching for “Rosetta Stone” could be served ads for competitors.

News of the settlement is likely to be a disappointment to those who have been closely watching the trademarks-as-keywords issue go through the court system. Because Google has been so successful at settling with detractors, there have been few real rulings to establish legal precedent and answer the question once and for all.

The only case Google has definitively won was decided earlier this month. In that case, Daniel Jurin — who holds the trademark for Styrotrim building materials — was the plaintiff. Jurin reportedly lost his attorney and didn’t respond to Google’s summary judgment motion. With no opposition, Google won easily, with the court saying Jurin didn’t provide sufficient evidence.

Earlier this year, The U.S. Court of Appeals revived part of the Rosetta Stone suit — related to direct infringement, contributory infringement and dilution claims — after a Virginia court had issued a summary judgment in favor of Google in 2010.

Now, the two companies are setting aside their differences to “meaningfully collaborate to combat online ads for counterfeit goods and prevent the misuse and abuse of trademarks on the Internet,” according to a joint press release. Court papers made public late in 2010 revealed that the companies — despite their legal dispute — had already been working together on catching counterfeiters and credit card criminals. Rosetta Stone even praised Google's Trust and Safety team to the Federal Bureau of Investigation (FBI).

Tuesday, 30 October 2012

Google: Disavowing Links Isn’t Replacement For Also Trying To Get Them Removed



Many SEOs cheered that Google’s new disavow links tool would make it easier to recover from a bad backlink profile. No more worrying about directories charging to remove links or trying to get out of bad link networks. But Google says it does want to see a good faith effort to go along with any disavow links request, or those disavow requests might not get honored.

Google: Try To Remove Links

Google had previously suggested that the disavow link tool wasn’t a replacement for making link removal requests. From the company’s blog post, the day the tool launched:

We recommend that you remove from the web as many spammy or low-quality links to your site as possible. This is the best approach because it addresses the problem at the root. By removing the bad links directly, you’re helping to prevent Google (and other search engines) from taking action again in the future…. If you’ve done as much as you can to remove the problematic links, and there are still some links you just can’t seem to get down, that’s a good time to visit our new Disavow links page
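
For context, the disavow tool itself takes a plain text file of URLs and domains. Here is a minimal Python sketch that assembles such a file in the documented format: one URL per line, a domain: prefix to disavow a whole domain, and # for comments. The example domains are invented.

```python
# Build a disavow file in the format Google's tool accepts. The entries
# below are invented examples, not real sites.

bad_urls = [
    "http://spammy-directory.example/widgets/links.html",
]
bad_domains = [
    "paid-links.example",
]

lines = ["# Links we could not get removed despite outreach"]
lines += bad_urls
lines += ["domain:" + d for d in bad_domains]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```

As the post above stresses, uploading such a file is meant to supplement, not replace, actually trying to get the links taken down.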

Disavow Not Enough? No, Says Google

But given the seemingly automated nature of the disavow link tool, did site owners really need to go through this effort? Why not just submit a list of bad links and save the time? Isn’t that really all you need to do? According to the head of Google’s web spam team, Matt Cutts, no:

I wouldn’t count on this. In particular, Google can look at the snapshot of links we saw when we took manual action. If we don’t see any links actually taken down off the web, then we can see that sites have been disavowing without trying to get the links taken down.

His answer came as part of a long Q&A I posted yesterday about the link disavow tool. It suggests that the link disavow tool is also looking to see some actual removal of bad links, or it won’t kick in.

That's odd, however. The entire point of having a link disavow tool is that some links are hard to get removed. A tool meant for links you can't remove that only kicks in if you get links removed either defeats the purpose of having the tool or is a Catch-22.

Hit By Manual Action? Don't Mess Around: Both Remove & Disavow

I think the reality is two-fold, however. First, many of the sites impacted by things like the Penguin Update and seeking to remove bad links may have a great many of them, so by removing at least some, the link disavow tool can help as part of an overall clean-up effort.

Second, I actually think the link disavow tool isn’t trying to do some type of cross-checking. If you were hit by an automated action like Penguin based on your backlinks, rather than a manual action, I suspect that just disavowing those bad links (if you can tell what they are) will be sufficient. If you read the comments from Cutts closely, his statement is more about what Google could do, not what it necessarily does.

But having said that, his is the official advice, and so if you think you were hit by a penalty, I’d follow it fully. Try to remove some of those links manually.

Moreover, if you were hit by a manual action, where you know some human at Google has penalized you, you're going to be under even more scrutiny when filing a reinclusion request. That means you'll want to be sure that if a Google web spam team member checks on your site, they'll see you've done more than just disavow links.

Monday, 29 October 2012

SMX Social Media Features Del Harvey, The “Matt Cutts” Of Twitter



Our SMX Social Media Marketing show is coming to Las Vegas this Dec. 5 & 6, and part of our great agenda is a keynote talk by Del Harvey, Twitter's director of trust and safety.

Not familiar with Del? You should be, if you’re a marketer doing anything involved with Twitter. She oversees what’s considered right and wrong when it comes to promotion on the service.

To put Del in a context our search marketing readers will understand, she’s effectively the “Matt Cutts” of Twitter, overseeing the type of policing in Twitter’s tweets that Cutts does for Google’s search results.

I’ve heard Del speak before, and she’s full of excellent advice. Her keynote on Dec. 5, Twitter Talks: How To Win Friends & Not Be Unfollowed By People (Or Worse), will be one that you won’t want to miss.

Saturday, 27 October 2012

The EMD Update: Google Issues “Weather Report” Of Crack Down On Low Quality Exact Match Domains



The head of Google's web spam fighting team, Matt Cutts, announced on Twitter that Google would be rolling out a “small” algorithm change that will “reduce low-quality ‘exact-match’ domains” showing up so highly in the search results.

Cutts said this will impact 0.6% of English-US queries to a noticeable degree. He added that it is “unrelated to Panda/Penguin.” Panda is a Google algorithm filter aimed at fighting low-quality content; Penguin is one aimed at fighting web spam.

This should come as no surprise, as Cutts said a couple of years ago that Google would be looking at why exact-match domains sometimes rank well when they shouldn't.

Over the coming days, you will likely see shifts in the search results, as many sites that ranked well on the strength of being an exact match domain may no longer rank as highly in Google's search results.

Exact match domains are domains that exactly match a search query. For example, if I sold blue widgets and owned the domain name www.bluewidgets.com, that would be an exact match domain for the query “blue widgets”.
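
To make the definition concrete, here is a tiny Python sketch that checks whether a domain is an exact match for a query. It is purely illustrative and has nothing to do with how Google actually evaluates domains.

```python
import re

def is_exact_match_domain(domain, query):
    """True if the domain's name (minus www and TLD) is the query's words
    run together. Purely illustrative, not Google's actual logic."""
    name = re.sub(r"^www\.", "", domain.lower())  # drop a leading "www."
    name = name.split(".")[0]                     # drop the TLD
    normalized_query = re.sub(r"[^a-z0-9]", "", query.lower())
    return name == normalized_query

print(is_exact_match_domain("www.bluewidgets.com", "blue widgets"))   # True
print(is_exact_match_domain("www.widgetreviews.com", "blue widgets")) # False
```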

Keep in mind that this doesn't mean sites with keywords they hope to rank for in their domain names are now doomed. Rather, the change targets low-quality sites that have been ranking largely on the strength of an exact match.

Friday, 26 October 2012

Google Launches “Get Your Google Back” Campaign For Windows 8 Users



Google has launched a Get Your Google Back site to teach new Windows 8 users how to restore or add Google Search to their devices and to get Chrome. It doesn’t help those with Windows RT devices who seem stuck without Google, however.

Windows 8 was released today, which means a number of people will find Bing as the default search engine. This isn't always the case: those who are upgrading and already have Google as their default, or who buy computers from companies that partner with Google to be the default, should still have Google, though we're checking on this.

Still, plenty of people may find things have changed, and the new site is designed to help those who want to get Google back quickly. The page gives you quick links to download icons to your Windows 8 computer, and a video shows how you can add them to the front screen on Windows 8.

If you're a Windows RT user, this site won't help you. Google Search is simply unavailable for Windows RT unless you go directly to Google in your browser. Our Sorry, Microsoft Surface Users: No Google Search App For You story has more about this.

Thursday, 25 October 2012

What is Online Marketing?


Internet marketing, also known as web marketing, online marketing, web advertising, or e-marketing, refers to the marketing (generally, promotion) of products or services over the Internet. Internet marketing is considered broad in scope because it covers not only marketing on the Internet itself but also marketing done via e-mail and wireless media. Digital customer data and electronic customer relationship management (ECRM) systems are also often grouped together under internet marketing.

Internet marketing ties together the creative and technical aspects of the Internet, including design, development, advertising and sales. It also refers to the placement of media along many different stages of the customer engagement cycle: through search engine marketing (SEM), search engine optimization (SEO), banner ads on specific websites, email marketing, mobile advertising, and Web 2.0 strategies.

In 2008, The New York Times, working with comScore, published an initial estimate to quantify the user data collected by large Internet-based companies. Counting four types of interactions with company websites in addition to the hits from advertisements served from advertising networks, the authors found that the potential for collecting data was up to 2,500 times per user per month.

Tuesday, 9 October 2012

Google Panda Update 20 Released


Google Panda Update 20 Released, 2.4% Of English Queries Impacted

Google has confirmed with us that on Thursday, September 27th, it released a Panda algorithm update. This would be the 20th Panda update, and thus we are naming it Panda 20. It is a fairly major Panda update that impacts 2.4% of English search queries and is still rolling out.

Late Friday afternoon, Google announced an exact match domain update that reduced the chances of low-quality exact match domains ranking well in Google. But over the weekend, many site owners without exact match domains noticed their rankings dropped as well. What was it? Google confirmed that it pushed out a new Panda release that isn't just a data refresh but an algorithm update. Google told us this “affects about 2.4% of English queries to a degree that a regular user might notice.” There is more to come, with Google promising to roll out more of this Panda algorithm update over the next 3-4 days.

Here is the comment Google's Matt Cutts sent us when we asked about this update:

Google began rolling out a new update of Panda on Thursday, 9/27. This is actually a Panda algorithm update, not just a data update. A lot of the most-visible differences went live Thursday 9/27, but the full rollout is baking into our index and that process will continue for another 3-4 days or so. This update affects about 2.4% of English queries to a degree that a regular user might notice, with a smaller impact in other languages (0.5% in French and Spanish, for example).