Expert’s Opinions

How to overcome hard drive data loss after a Blackmal virus attack?

Wednesday, May 26th, 2010

A computer virus is one of the most common causes of data being deleted from a hard drive. Each virus is different and is designed to harm a particular set of files.

Some of the common ways through which a computer virus can enter your system are downloading files from the Internet, attaching a virus-infected external storage media, and e-mails. Though the consequences of a virus attack vary from situation to situation, the most common outcome is permanent deletion of files.

In order to overcome the after-effects of a virus attack and access the files, the user needs to restore them from an up-to-date backup. However, if no backup is available, or the backup is unable to restore the required files, the user needs to opt for a commercial Hard Drive Recovery software that can recover the deleted files.

The list of computer viruses is long, but one of the most common is the ‘Blackmal’ virus. It has so far infected more than 600,000 computer systems and caused huge data losses. The virus is capable of permanently deleting eleven different file types on the third of every month.

The virus is programmed to delete files not only from the system’s hard drive, but also from network-attached storage.

Prevention: To prevent a Blackmal virus attack, follow the prevention tips below:

1. Always keep an updated anti-virus application installed on your system.

2. Always scan the external storage media before attaching it to your system.

3. Always scan an e-mail (specifically with an attachment) using an anti-virus application.

If these prevention tips were not followed and your files have been deleted, follow the steps below:

Restore the files from an adequate backup.

If the restoration process fails, you will need to recover the files using an effective third-party Hard Drive Recovery software.

A Hard Drive Recovery utility can scan virus-infected media and recover all files lost after a virus attack. Such tools have a self-explanatory user interface and require no prior technical knowledge to perform recovery.

http://get-a-designer.com

http://www.all1sourcetech.com

Posted in Editorial, Expert's Opinions

How to recover a corrupt MS SQL Server database

Monday, May 3rd, 2010

MS SQL Server is a relational database management system (RDBMS), which is specifically developed to be used in the enterprise environment. It provides increased productivity, efficiency, availability, and administrative ease to your organization.

As is the case with most applications, it too can run into errors that lead to data corruption. Corruption may arise because of various issues such as power surges, virus infections, human errors, an abrupt shutdown while the database is open, etc. In such cases, you should restore the database from an updated backup. If the backup is not up to date and you need the data urgently, you should use an SQL MDF repair tool to repair the SQL database.

Let’s consider a scenario wherein you have MS SQL Server installed on your system and, one day, the SQL Server database fails to open, displaying an error message:

“Server can’t find the requested database table.”

Once you encounter this error message, you are unable to access the database.

Cause: The SQL database is corrupt and, thus, inaccessible. It may have become corrupt for various reasons such as virus infections, human errors, power surges, an abrupt system shutdown while the database was open, etc.

Resolution: If the SQL database is not complex and does not contain a huge amount of data, you should try to rebuild it. However, if the database is complex and contains a large amount of data, consider using a third-party SQL Server repair application for MDF repair. Such tools are read-only in nature and do not overwrite the data while scanning the database with fast yet sophisticated algorithms. They also have a rich interface and enable you to repair the database yourself without the intervention of an expert.

SQL Recovery software is an MS SQL repair tool that is able to repair SQL databases created in MS SQL Server 2000, 2005, 2008. It can recover all kinds of database components such as tables, defaults, stored procedures, triggers, views and rules.

It can also recover database constraints such as primary key, foreign key, unique key, and check constraints, and it can repair a database even when the DBCC CHECKDB command cannot. The tool is compatible with Windows 7, Vista, 2003 Server, XP, 2000, and NT.

Posted in Expert's Opinions, Purely Technical

Inside JSF 2.0’s Ajax and HTTP GET Support

Wednesday, March 17th, 2010

With support for these techniques, this newest version of the Java component UI framework enables developers to build truly dynamic web pages simply and easily.

At the highest level, JSF technology provides an API for creating, managing, and handling UI components and a tag library for using components within a web page. The JSF 2.0 release simplifies the web developer’s life by providing the following:

  • Reusable UI components for easy authoring of web pages
  • Well defined and simple transfer of application data to and from the UI
  • Easy state management across server requests
  • Simplified event handling model
  • Easy creation of custom UI components

Let’s drill down into JSF 2.0’s support for GET requests and Ajax integration, as well as the productivity features this support makes possible. The demo application is an online quiz that allows a registered user to answer five simple questions testing his or her general knowledge. At the end of the quiz, the application displays the score.

Using GET Requests in JSF 2.0

To support GET requests, JSF 2.0 introduced the concept of View Parameters. View Parameters provide a way to attach query parameters to URLs. You use the tag <f:viewParam> to specify the query parameters.

Take this code for example:

<f:metadata>
    <f:viewParam name="previousScore" value="#{recordBean.oldScore}" />
</f:metadata>

In this example, the value of the parameter previousScore will be automatically picked up and pushed into the property oldScore of the recordBean. So, when a request comes in for a URL like this:

displayData.jspx?previousScore=10

The value of the bean property oldScore will be set to 10 when the request is processed, which avoids manually setting the value or using a listener. Another interesting point is that, like any other component, the tag <f:viewParam> supports conversion and validation, so there is no need for separate conversion/validation logic.

The following new tags in JSF 2.0 relate to GET request and Ajax support:

  • h:button – Renders a button that generates a GET request without any hand-coding of URLs
  • h:link – Renders a link that generates a GET request without any hand-coding of URLs
  • h:outputStylesheet – Refers to a CSS resource
  • h:outputScript – Refers to a JavaScript resource
  • f:event – Registers a specification-defined or a user-defined event
  • f:ajax – Enables the associated component(s) to make Ajax calls
  • f:metadata – Declares the metadata facet for the view
  • f:viewParam – Defines a view parameter that associates a request parameter with a model property
  • f:validateBean – Delegates the validation of the local value to the Bean Validation API
  • f:validateRegex – Uses the pattern attribute to validate the wrapping component
  • f:validateRequired – Ensures the presence of a value

Bookmarking Support

Because all JSF 1.x interactions with the server use only HTTP POST requests, those JSF versions don’t support bookmarking pages in a web application. Even though some tags supported the construction of URLs, it was a manual process with no support for dynamic URL generation. JSF 2.0’s support for HTTP GET requests provides the bookmarking capability with the help of new renderer kits.

A new UI component UIOutcomeTarget provides properties that you can use to produce a hyperlink at render time. The component allows bookmarking pages for a button or a link.

The two HTML tags that support bookmarking are h:link and h:button. Both generate URLs based on the outcome property of the component, so that the author of the page no longer has to hard code the destination URL. These components use the JSF navigation model to decide the appropriate destination.

Take the following code for example:

<h:link outcome="login" value="LoginPage">
    <f:param name="Login" value="#{loginBean.uname}" />
</h:link>

This bookmarking feature provides an option for pre-emptive navigation (i.e., the navigation is decided at the render response time before the user has activated the component). This pre-emptive navigation is used to convert the logical outcome of the tag into a physical destination. At render time, the navigation system is consulted to map the outcome to a target view ID, which is then transparently converted into the destination URL. This frees the page author from having to worry about manual URL construction.

Support for Ajax

The default implementation provides a single JavaScript resource that has the resource identifier jsf.js. This resource is required for Ajax, and it must be available under the javax.faces library. The annotation @ResourceDependency is used to specify the Ajax resource for the components, the JavaScript function jsf.ajax.request is used to send information to the server in an asynchronous way, and the JavaScript function jsf.ajax.response is used for sending the information back from the server to the client.

On the client side, the API jsf.ajax.request is used to issue an Ajax request. When the response has to be rendered back to the client, the callback previously provided by jsf.ajax.request is invoked. This automatically updates the client-side DOM to reflect the newly rendered markup.

The two ways to send an Ajax request by registering an event callback function are:

  • Use the JavaScript function jsf.ajax.request
  • Use the <f:ajax> tag

JavaScript Function jsf.ajax.request

The function jsf.ajax.request(source, event, options) is used to send an asynchronous Ajax request to the server. The code snippet below shows how you can use this function:

<h:commandButton id="newButton" value="submit"
    onclick="jsf.ajax.request(this, event,
        {execute: 'newButton', render: 'status', onevent: handleEvent, onerror: handleError}); return false;" />

The first argument in the function represents the DOM element that made an Ajax call, while the second argument (which is optional) corresponds to the DOM event that triggered this request. The third argument is composed of a set of parameters, which is sent mainly to control the client/server processing. The available options are execute, render, onevent, onerror, and params.

<f:ajax> Tag

JSF 2.0 enables page authoring with <f:ajax>, a declarative approach to making Ajax requests. You can use this tag instead of manually coding the JavaScript for the Ajax request calls. The tag serves two roles, depending on its placement: you can nest it within a single HTML or custom component, or wrap it around a group of components. If you nest it within a single component, it associates an Ajax action with that component.

The <f:ajax> tag has four important attributes:

  • render – ID or a space-delimited list of component identifiers that will be updated as a result of the Ajax call
  • execute – ID or a space-delimited list of component identifiers that should be executed on the server
  • event – The type of event the Ajax action will apply to (refers to a JavaScript event without the on prefix)
  • onevent – The JavaScript function to handle the event

Consider the following code from the login page of the online quiz application. The code validates the input of the email field by sending an Ajax call for every keystroke. The validation is done by a managed bean method that acts as a value change listener.

<h:inputText label="eMail ID" id="emailId" value="#{userBean.email}" size="20"
    required="true" valueChangeListener="#{userBean.validateEmail}">
    <f:ajax event="keyup" render="emailResult"/>
</h:inputText>

<h:outputText id="emailResult" value="#{userBean.emailPrompt}" />

Using Ajax to Validate the Input: The online quiz application validates email field input by sending an Ajax call for every keystroke.

Here, <f:ajax> is nested within the emailId inputText component. For every keyup event that is generated, an Ajax call is sent to the server, which invokes the valueChangeListener. By default, the component in which the tag is nested is executed on the server. So, the execute attribute is not specified. Also, for an input component, the default event is valueChange, so the event attribute is also not used. The render attribute indicates that the outputText component emailResult should be updated after the Ajax call.

If you place this tag around a group of components, it will associate an Ajax action with all the components that support the events attribute.

<f:ajax event="mouseover">
    <h:inputText id="input1" …/>
    <h:commandLink id="link1" …/>
</f:ajax>

In this example, input1 and link1 will exhibit Ajax behavior on a mouseover event.

<f:ajax event="mouseover">
    <h:inputText id="input1" …>
        <f:ajax event="keyup"/>
    </h:inputText>
    <h:commandLink id="link1" …/>
</f:ajax>

In this example, input1 and link1 exhibit Ajax behavior on keyup and mouseover events, respectively.

Using the Ajax tag enhances the markup of the associated component to include a script that triggers the Ajax request. This allows the page author to issue Ajax requests without having to write any JavaScript code.

These new features enable developers to build truly dynamic web pages simply and easily.

http://www.developer.com

http://www.all1sourcetech.com

Posted in Expert's Opinions, Purely Technical

How to build a PHP Link Scraper with cURL?

Thursday, March 4th, 2010

Let’s build a robot that scrapes links from web pages and dumps them into a database, then reads those links back from the database and follows them, scraping up the links on those pages, and so on ad infinitum.

To begin, let’s have a look at the groundwork.

The cURL Component-

cURL (or “client for URLs”) is a command-line tool for getting or sending files using URL syntax. It was first released in 1997 by Daniel Stenberg as a way to transfer files via protocols such as HTTP, FTP, Gopher, and many others through a command-line interface. Since then, many more contributors have participated in developing cURL further, and the tool is widely used today.

Using cURL with PHP-

PHP is one of the languages that provide full support for cURL. (The PHP manual lists all the functions you can use for cURL.) Luckily, PHP also enables you to use cURL without invoking the command line, making it much easier to use cURL from within server-side scripts. The example below demonstrates how to retrieve the page example.com using cURL and PHP.

<?php
// Initialize a cURL session for the page to fetch
$ch = curl_init("http://www.example.com/");
// Open a local file to receive the page content
$fp = fopen("example_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);   // write the response into the file
curl_setopt($ch, CURLOPT_HEADER, 0);   // leave out the HTTP headers
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>

The Link Scraper-

For the link scraper, you will use cURL to get the content of the page you are looking for, and then you will use some DOM to grab the links and insert them into your database. You can build the database from the information below; it is really simple stuff.
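
The article assumes this table already exists, so here is a minimal, hypothetical sketch of what the links table might look like. The connection details are placeholders and the column types are guesses; only the column names (url, gathered_from, visited) come from the queries used below.

<?php
// Hypothetical setup for the links table used by the scraper below.
// Host, credentials, and database name are placeholders.
mysql_connect('localhost', 'db_user', 'db_password') or die(mysql_error());
mysql_select_db('scraper') or die(mysql_error());

mysql_query("
    CREATE TABLE IF NOT EXISTS links (
        id            INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        url           VARCHAR(255) NOT NULL,
        gathered_from VARCHAR(255) NOT NULL,
        visited       TINYINT(1)   NOT NULL DEFAULT 0
    )
") or die(mysql_error());
?>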

$result = mysql_query("SELECT url FROM links WHERE visited != 1");
if ($result)
{
    while ($row = mysql_fetch_array($result))
    {
        $target_url = $row['url'];
        $userAgent  = 'ScraperBot';

This query grabs the unvisited URLs from the database table, and the while loop processes them one at a time.

$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
curl_setopt($ch, CURLOPT_URL, $target_url);

After instantiating cURL, you use curl_setopt() to set the User-Agent header of the HTTP request, and then tell cURL which page you want to retrieve.

curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 20);

You’ve set a few more options with curl_setopt(): the request will fail on an HTTP error, follow redirects, set the referer automatically, return the transfer as a string, and time out after 20 seconds per page. A standard server will usually time a script out after 30 seconds, but if you run this from your localhost you can configure a longer (or unlimited) script timeout.

$html = curl_exec($ch);
if (!$html)
{
    echo "ERROR NUMBER: " . curl_errno($ch);
    echo "ERROR: " . curl_error($ch);
    exit;
}

This grabs the actual page by executing the cURL request with curl_exec(), sending along the options you set. If an error occurs, its number and description are reported by curl_errno() and curl_error(), respectively, and the script exits.

$dom = new DOMDocument();
@$dom->loadHTML($html);

Next, you create a document model of your HTML (that you grabbed from the remote server) and set it up as a DOM object.

$xpath = new DOMXPath($dom);
$href = $xpath->evaluate("/html/body//a");

Use XPATH to grab all the links on the page.

for ($i = 0; $i < $href->length; $i++) {
    $data = $href->item($i);
    $url = $data->getAttribute('href');
    // store the link along with the page it was gathered from
    $query = "INSERT INTO links (url, gathered_from) VALUES ('$url', '$target_url')";
    mysql_query($query) or die('Error, insert query failed');
    echo "Successful Link Harvest: " . $url;
}

} // end of the while loop
} // end of the if ($result) check

This dumps all the links into the database, along with the URL they were gathered from, so you never have to go back there again. A more intelligent system might keep a separate table of URLs already visited, with a normalized relationship between the two.
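
As a rough illustration of that more intelligent design, the normalized schema could look something like the sketch below. The table and column names here are invented for the example, not taken from the article.

<?php
// Hypothetical normalized layout: one table of unique URLs with a visited
// flag, and one table recording which page each link was found on.
mysql_query("
    CREATE TABLE IF NOT EXISTS urls (
        id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        url     VARCHAR(255) NOT NULL UNIQUE,
        visited TINYINT(1)   NOT NULL DEFAULT 0
    )
") or die(mysql_error());

mysql_query("
    CREATE TABLE IF NOT EXISTS url_links (
        source_id INT UNSIGNED NOT NULL, -- page the link was found on
        target_id INT UNSIGNED NOT NULL, -- page the link points to
        PRIMARY KEY (source_id, target_id)
    )
") or die(mysql_error());
?>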

Going a step further than just grabbing links, you could harvest images or entire HTML documents as well. This is roughly where you start when building a search engine. Creating your own search engine may seem naively ambitious, but perhaps this little bit of code will inspire you.
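
For instance, harvesting image URLs uses exactly the same DOM/XPath approach as the link scraper. Here is a minimal sketch, assuming it runs inside the loop above where $dom has already been built; storing or downloading the images is left out.

<?php
// Sketch: collect image URLs from the page already loaded into $dom above.
$xpath  = new DOMXPath($dom);
$images = $xpath->evaluate("/html/body//img");

for ($i = 0; $i < $images->length; $i++) {
    $src = $images->item($i)->getAttribute('src');
    echo "Found image: " . $src . "\n";
    // From here you could insert $src into an images table,
    // or fetch the file itself with another cURL request.
}
?>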

Source:- http://www.developer.com

http://www.all1social.com

http://www.all1martpro.com

Posted in Editorial, Expert's Opinions, Purely Technical

Verizon to allow Skype calls over wireless network

Wednesday, February 17th, 2010

Verizon Wireless will allow customers to use the Internet phone service Skype to make free calls on some phones, a type of application that wireless carriers have been slow to allow.

Under a deal announced Tuesday at the Mobile World Congress tradeshow, users of some Verizon phones who have a voice and data plan will be able to download a free Skype application in late March. That will let them call or instant-message other Skype users for free or call regular phone numbers outside the United States for a fee paid to Skype. These calls would go over Verizon’s network and would not use up minutes on a cell phone plan.

However, minutes would be deducted when using Skype to call regular phone numbers in the US, according to Verizon.

Initially, the mobile application will be available for nine Verizon phones, including several BlackBerry models and Motorola Inc.’s Droid and upcoming Devour handsets.

According to Verizon’s chief marketing officer John Stratton, the application will be able to run all the time in the background. This means other people should be able to contact you through Skype even if your phone is on standby.

Other wireless carriers have blocked the Skype app from running all the time. It’s available on the iPhone only in Wi-Fi hot spots. In October, AT&T said it would relent and let the program work over its cellular network as well, but Skype has not yet released an application to enable that. Verizon’s version of Skype mobile will not work over Wi-Fi, the companies said.

In an interview, Skype CEO Josh Silverman said that working directly with Verizon let Skype do things it otherwise couldn’t, such as integrating its service with a phone so that Skype is built into the address book.

Originally, wireless carriers feared giving customers a way to avoid using voice minutes in their cell phone plans.

Now the companies are recognizing the value of customers who pay extra for data service. When the carriers “see how popular Skype is with American consumers they realize by offering Skype they can attract more customers,” according to Silverman.

http://get-a-designer.com

http://www.all1sourcetech.com

Posted in Expert's Opinions, Opensource, Technical News

Kaspersky Patents Hardware-Based Antivirus

Wednesday, February 17th, 2010

Kaspersky Lab has announced that it has received a US patent for a hardware-based antivirus solution. The announcement emphasizes that the hardware operates below the level of rootkits and therefore can’t be bypassed by them.

The patent, #7,657,941, entitled “Hardware-based anti-virus system,” was awarded to inventor Oleg V. Zaitsev (Technology Expert at Kaspersky Lab) and assigned to Kaspersky. The abstract reads:

An anti-virus (AV) system based on a hardware-implemented AV module for curing infected computer systems and a method for updating AV databases for effective curing of the computer system. The hardware-based AV system is located between a PC and a disk device. The hardware-based AV system can be implemented as a separate device or it can be integrated into a disk controller. An update method of the AV databases uses a two-phase approach. First, the updates are transferred from a trusted utility to an update sector of the AV system. Then, the updates are verified within the AV system and the AV databases are updated. The AV system has its own CPU and memory and can be used in combination with AV application.

So it seems this device is an actual separate computer running an embedded AV application. While the press release and abstract emphasize that the AV functionality doesn’t strictly need a software counterpart running in the host system, it does need host software in order to update itself, because the AV hardware won’t have network access. This update application will need to be trusted and hardened against attack.

The difficulty of detecting rootkits once they have been installed does call for unconventional measures. Whether a hardware approach is truly more effective remains to be seen. If the device is just an AV system running below the level of the rootkit, the improvement will be small, as it will still only operate as well as the signature process allows. If running below rootkits allows the device to run heuristic tests that are better at detecting rootkit behavior, the difference could be substantial.

There is another advantage to hardware-based AV: because the device has its own CPU and memory, and only minimal software runs on the host PC, the performance impact on the PC will be lessened. In fact, though, this device cannot be a complete security solution, since it can only monitor disk operations; modern security suites also monitor network connections.

Posted in Expert's Opinions, Technical News

This Week in Geek: Hacking Your Wii, New AMD Chips, ATI Cards, and Core i7 Rumors

Tuesday, February 16th, 2010

This week, the biggest buzz was about Google Buzz, so here are a few hardware stories that might have been buried under the wave of social-networking coverage.

How to Hack Your Wii For Homebrew Apps and Games

Looking to do more with your Wii? The Homebrew Channel provides a thriving community of developers and users looking to bring some amazing new features to the aging Wii console, including emulation, DVD and game playback from other regions, and the ability to run Linux. Mike Keller provides instructions for installing Homebrew on your Wii, which is easier than ever thanks to the mature Homebrew scene. Game on!

AMD Details Speed, Power Saving Features of Fusion

Code-named Llano, the new quad-core AMD Fusion processor running at speeds in excess of 3.0 GHz is due for release in 2011. A hybrid chip that combines a graphics processor and a CPU on a single piece of silicon, Fusion’s integrated graphics processor will natively support DirectX 11, allowing users to view Blu-ray movies or play 3D games. While the heat output from all four cores on a Fusion chip could be 100 watts, AMD’s new power management capabilities allow more efficient control of the chip’s energy draw, ensuring a cool processing and gaming experience.

ATI Introduces Radeon HD 5570, Targets Small Desktop PCs

Right after announcing the affordable Radeon HD 5450 line of GPUs, ATI introduced the energy-efficient, high-performance Radeon HD 5570, which is geared toward small form factor PCs. The new card supports both Direct X 11 and OpenGL 3.2, can drive up to three monitors, and only draws 45 watts when under a full processing load. Newegg is selling the HD 5570s for around $85.

Rumor: Core i7 coming soon to MacBook Pro?

The Apple rumor mill is at it again, but this time it has nothing to do with mythical tablets: instead, reports are circulating that Apple will release an updated MacBook Pro featuring Intel’s newest Core i7 chips. What lends credence to this rumor? A recent benchmark test spotted on the Geekbench Web site lists a system that identifies itself as a MacBookPro 6,1. Current MacBook Pros identify themselves with “5,x” codes.

Details include a 2.66GHz Core i7 processor and an unreleased build of Mac OS X (10.6.2 Build 10C3067; the current release is 10.6.2 build 10C540). However, the track record of the site that announced this rumor is spotty at best, so your guess is as good as ours as to the validity of this leak.

Posted in Expert's Opinions, Product Reviews, Technical News

Opera Mini: 5 Reasons iPhone Owners Need It

Thursday, February 11th, 2010

Opera has announced plans to release the Opera Mini mobile Web browser for the iPhone and plans to show it off during Mobile World Congress next week. The early announcement is meant to generate excitement and thereby pressure Apple into approving this threat to its native Safari browser.

In the meantime, you can try the Opera Mini simulator, or check out these five reasons Opera Mini could become your favorite iPhone Web browser, if Apple approves it:

Super Speed

According to a claim made by Opera, its mobile Web browser can cut the iPhone’s Web data traffic by 90 percent, thanks to a method of compressing images and text on its own servers. This would, of course, improve Web page loading times as well.

Home Page

Forget loading up a new browser window with nothing in it. Opera Mini’s “Speed Dial” feature lets you customize a grid of nine favorite Web sites for quick loading without visiting your list of bookmarks.

Find in Page

The inability to search for text within a Web page is Safari’s most glaring omission. In Opera Mini, it’s as simple as clicking the Tools icon, then clicking “Find in Page” and typing whatever you’re looking for. Sorry, Apple, sometimes Web pages just need to be searched.

Greater Flexibility

Here are some other things you can’t do in Safari, all of which can be controlled or enabled in Opera Mini’s settings menu: saved passwords, adjustable image quality, full-screen browsing, adjustable font sizes, and customizable skins.

Free, Presumably

iPhone experts might point out that there are already plenty of other browsers to choose from, but the vast majority of them cost money. Opera Mini is a free download for other phones, so I assume it’ll be free if Apple approves it for the iPhone. That alone could make it the most attractive Safari alternative yet.

Posted in Expert's Opinions, Technical News

Malicious Firefox Add-ons Installed Trojans

Saturday, February 6th, 2010

Last night, Mozilla announced that two experimental Firefox add-ons, Master Filer and the Sothink Web Video Downloader version 4, infected victim PCs with Trojans when either add-on was installed.

The small-distribution extensions were previously available via Mozilla’s add-on site, but have since been removed. According to Mozilla’s post, the Master Filer add-on had been downloaded about 600 times and installed the Bifrose Trojan. The Sothink Web Video Downloader version 4 slipped in the LdPinch Trojan, and had been downloaded about 4,000 times.

According to the open-source organization, the malicious add-ons managed to sneak by the one malware scanner (unnamed in the post) used by Mozilla. The organization says it will now be scanning with two additional detection tools.

If you happen to have installed either of these malicious add-ons, note that removing the add-on will not remove any installed Trojan. You’ll need to run a separate antivirus scan and disinfection to clean your system. Mozilla’s post includes a list of antivirus software currently known to detect the particular Trojans involved.

This unfortunate incident makes clear why relying solely on one antivirus scanner is never a good idea, as no one program detects everything. Since this has happened at least once before with an infected Vietnamese language pack, I’m curious why Mozilla doesn’t simply switch to uploading all add-on submissions to the free Virustotal.com, which uses about 40 different engines to scan each submission. I’ve also asked Mozilla which scanner it had been using. If I get that information I’ll add it to this post.

According to Mozilla, it had been using ClamAV as its sole scanner prior to this incident. I’d guess Mozilla feels it’s a natural match as an open-source app, but the ClamAV engine didn’t fare well at detection tests when I reviewed the Windows version of the program, ClamWin.

http://www.all1Press.com

http://get-a-designer.com

Posted in Expert's Opinions, Opensource, Technical News

Top SEO Tools

Friday, February 5th, 2010

Top SEO tools are a vital component when performing advanced search engine optimization.

  • Search Analytics Tools – establish your marketing goals and establish a baseline for where you are right now.
  • Keyword Research Tools – discover the keywords your customers are searching for right now.
  • Competitive Research Tools – see what keywords your competitors are targeting.
  • PPC Tools – buy important keywords and track the results to understand how well they convert, which helps you focus your organic SEO strategy on the most profitable keywords. Save money using these free Yahoo! Search Marketing & Microsoft adCenter coupons.
  • Link Analysis Tools – start building your link profile and track your progress compared to competing websites.
  • Search Engine Ranking Checkers – determine how effective your marketing is by watching your search engine rankings improve.
  • Web CEO – billed as the most complete SEO software package on the planet, offering more for free than any other SEO software package or suite.
  • Sitemap Generator – an excellent tool for creating RSS feeds.
  • SEO Text Browser – lets you see your website as search engines see it.
  • Search Engine Spider Simulator – simulates a search engine by displaying the contents of a web page exactly as a search engine would see it.
  • Google Sitemap Generator – creates Google sitemaps in the correct XML format.
  • Domain Tools – supports many other types of searches besides whois records.
  • SEO Digger – essentially a reverse search engine of sorts that shows you the keywords a URL ranks for.
  • Website Grader – a free SEO tool that measures the marketing effectiveness of a website.
  • SEO Elite – your website or product’s profitability and success is directly dependent on how much targeted traffic you get.
  • SeoQuake – a powerful tool for Mozilla Firefox and Internet Explorer, aimed at helping webmasters who deal with search engine optimization and Internet promotion of websites.

All1Source Technologies provides proven search engine optimization results using only ethical techniques. Our search engine marketing ensures a high return on investment by achieving maximum visibility for your website within major search engines, including Google, MSN, and Yahoo! Search.

At All1Source Technologies, all search terms chosen will receive listings on the leading search engines: Google, Yahoo, MSN, Ask, Altavista, Hotbot, and Alltheweb.

http://www.all1Press.com

http://get-a-designer.com

Posted in Expert's Opinions, Opensource, Technical News