Thursday, March 10, 2011

Mini-Optical USB Mice

I'm gonna take a break from all of the software programming stuff and talk about an input device: the Belkin Mini-Optical USB Mouse. The product is essentially a bite-sized mouse with a cleverly implemented, retractable USB cord.

This very small optical mouse -- between 1/2 and 3/4 the size of an Apple Magic Mouse -- still leaves room for a scroll wheel, as well as left- and right-click buttons. (I eyeballed that size comparison -- good thing I have 20/20.) Nevertheless, it is very small and, as a result, your hand cannot rest on it with ease; it has to remain perched on top.

The USB retraction mechanism is still clever, even though countless earphones use the same trick. Either way, the design prevents the tangling that a mouse cord otherwise -- usually without fail -- produces.

If you are on the run and need a portable mouse, the one I have certainly does the job. However, it is not comfortable in my hand. Additionally, even though one's hand may adjust to the new grip and feel, that does not mean it is good for one's arms. (For example, the mini-mouse is to a regular mouse what typing with your elbows bent past 90 degrees is to typing with them at a proper 90-degree angle: you may adjust to it, but that makes it neither optimal nor safe.) To reiterate, although this product gets the job done, I would not rely on it for extended use; it just isn't comfortable, and it becomes a hindrance to productivity.

Nevertheless, if you must use this mouse -- either once or for an extended period of time -- please try the following: lower the tracking speed, which is basically the ratio between how far the cursor moves on the screen and how far the mouse moves across the desk. Accustomed to a normally sized mouse, a user will likely exert excess force when moving the device and thus lose control of it. Therefore, as you are adjusting to the product, lower the tracking speed to a point at which you reach basic proficiency; from there, you can slowly increase it again.

In sum, although I have provided one way to cope with a mini-mouse temporarily, I find the mini-mouse to be an inconvenience: its small size makes switching between inputs (e.g. keyboard to mouse) clumsy, which predictably hinders productivity. To make a statement, I am not using the mini-mouse as I adjust fonts and click "SAVE NOW."

Monday, February 28, 2011

Watson: So Much Potential

Hello.

I'm sure that a large portion of our society is familiar with Watson, IBM's question-answering machine, who was recently featured on Jeopardy!, where he obliterated the reigning champions.

Development Process To Maximize Potential
I have an intriguing idea regarding Watson's potential.

First, of course, Watson has to be made to answer every question accurately. For the questions he cannot answer, the algorithms must be tweaked and any underlying problems fixed. (Please excuse me if I make this sound like an easy task.)

At this point, these algorithms should be refined further, to the point where Watson can "understand" allusions and wordplay, and "understand" references -- for example, when a sentence that follows another begins "He...," Watson can work out what the "He" refers to. Essentially, he can mathematically grasp the dynamic nature of language. He could then combine his findings into a well-structured, cohesive paragraph -- or maybe even an essay-sized discourse.

Now notice that this sort of question-answering system would be almost the polar opposite of the Jeopardy! style. Instead of being given a clue and returning a one-word answer, Watson would be given a single word or short question and return a larger, more substantive piece of writing.

Ultimately, Watson could be teasing apart references across all sorts of sentences and finding connections among millions of articles to produce one mega-article. He could perform that kind of extensive research quickly, far beyond what a single human mind could manage.

The Potential
Therefore, we could pose grand questions such as "What is life?" Of course, Watson would not have access to any documentation or theory that we do not have access to. However, if he could analyze more sources than a single human could in a lifetime, understand allusions, find accurate synonyms (unlike the middle schooler who wants Microsoft Word to supply bigger words without considering whether they actually fit the original word), and incorporate studies from other disciplines, Watson might be able to arrange words in a way that changes the current perspective on "life" -- or whatever the question was.

Why that works
Now, one might ask, why do we need to change the current perspective? In his book "The Origins of Modern Science," Herbert Butterfield provides an answer. He argues that the whole transition to and development of modern science can be traced back to one thing -- and one thing alone: a shift in perspective. I read the book a while ago, so I do not remember all of the specifics, but in the study of oxygen, gravity, and the universe, progress came when people viewed those problems differently in their own minds and so understood the world differently than the scientists who held on to incorrect theories. Watson's discourse may be the tool that jump-starts us toward -- or, for the Star Wars fans, puts us into hyperdrive toward -- the future. Watson may be the force whose change in scope sparks a multi-discipline paradigm shift in understanding. (Eventually, he may even be folding his own articles back into his repertoire of "knowledge.")

Let's be practical
Let's consider a more practical question: "How do we cure AIDS?" Watson would then research AIDS, other fields and disciplines, and historical examples, and give "his own opinion" about the elusive cure. I by no means believe that he would spit out the cure, but his answer could prompt researchers to take a new look at AIDS and appreciate the implications of a minor shift in meaning. Einstein took the same approach -- without the computer -- when he asked, "What is gravity?" (I still need to track down a source for this.)

Closing thoughts
I understand that this will most definitely NOT be easy, but it's worth a shot. I also understand the negative effects -- someone could have a thesis at their disposal without exerting any effort -- but an attempt to develop Watson in such a way would, at the least, benefit the computer science field and help people think through problems logically even when they don't have a real human there to contemplate them with.

In sum, those are great uses, and they can definitely help the world in the ways described above. Even if this idea, which I find intriguing, never achieves the long-term, perspective-changing goals, Watson still has massive potential in scenarios like assisting a surgeon during surgery -- he may even save your life one day.


Coming Later:
Later: a debate on Watson's consciousness, the reason why I put so many words of human understanding in quotation marks, etc.

Tuesday, February 22, 2011

Robots File

Hello!

A friend of mine brought up robots files in a discussion, so I decided to write about that topic here. I'll provide a definition, display a sample, and list some benefits as well as some problems of the robots file.

Definition:
Essentially, in order to keep web crawlers from crawling certain parts of a website, the administrator can write a "robots.txt" file formatted under the guidelines found in the Robots Exclusion Protocol. With the robots file, one can specify which files certain web crawlers are asked not to access. The benefits of this type of control are outlined in the "Benefits" section below.

Sample:
1. User-agent: GoogleBot
2. Disallow: /cgi-bin/
3. Disallow: /temp/
If a web crawler comes across this file, it reads the user-agent line and checks - via a substring test - whether that line applies to it. If the crawler is mentioned in the user-agent line, it knows not to access whatever files or directories are listed in the disallow statements that follow.
General Structure:
1. User-agent: [Crawler name; asterisk means all crawlers]
2. Disallow: [Directory or Filename]

(NOTE: One file or directory per "Disallow" statement.)
(NOTE: The disallow statements apply only to the crawler(s) named in the most recent user-agent line; a new user-agent line starts a new group of rules.)

Explanation of the code:
Line 1 specifies which crawlers the following rules apply to. In this sample, only GoogleBot is addressed; if an asterisk (*) is used instead, the rules apply to every web crawler.
Lines 2 and 3 both start with "Disallow:". The path that follows the colon (e.g. /cgi-bin/) is the disallowed file or directory. In other words, line 2 tells the named crawler (here, GoogleBot) not to access the "/cgi-bin/" directory. Line 3 does the same for the "/temp/" directory.
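To make the protocol concrete, here is a small Python sketch that uses the standard library's urllib.robotparser to check whether a given crawler may fetch a given path. The URL here is just a placeholder, and the expected results assume the site serves the sample rules shown above.
Code:

from urllib import robotparser

# Point the parser at the site's robots.txt (placeholder address).
rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()  # download and parse the robots file

# Ask whether a particular crawler may fetch a particular path.
# Assuming the sample rules above, the first check would print False and the second True.
print(rp.can_fetch("GoogleBot", "http://www.example.com/cgi-bin/form.cgi"))
print(rp.can_fetch("GoogleBot", "http://www.example.com/index.html"))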

Benefits:
The robots file serves a multitude of purposes and can greatly assist any web developer, depending on his or her needs.
  1. Robots files can protect your site from resource-hungry crawlers. Essentially, when a crawler visits your site, in order to thoroughly round up the data, it may execute all of your scripts. Some scripts, however - like Facebook's account-creation script - do not need to be crawled, because they serve no purpose to a search engine. In that case, a robots file can block them, leaving more resources available for human users. It can also protect the integrity of an online vote, in that a well-behaved crawler will not affect the results if it is asked not to run the voting scripts.
  2. In another instance, if you do not have a robots file, a crawler might repeatedly follow a broken link on your site. When the page is not found, an error page is sent back. Since most big websites have customized error pages, serving those pages to many different crawlers drains server resources and wastes the crawler's time. In sum, a robots file can prevent a drain on both the server's and the crawler's resources, so blocking access (through the robots file) to files that are likely to contain broken links would be prudent.
  3. Outfront.net provided another great point. One can use a robots file while slowly developing a site, if one does not want unfinished elements showing up in search engine results - or, as previously mentioned, does not want to waste resources sending out error pages for broken links. This also benefits the search engine, in that the crawler wastes less of its time fruitlessly searching for nonexistent pages.

Problems:
There are two problems associated with the robots file:
  1. A robots file does not guarantee that a web crawler will stay away from the files or directories that are disallowed. It merely provides a list for a well-intentioned crawler to work with. When a crawler accesses your site, it reads the list and can decide where not to go; however, it can also completely disregard it. A robots file is like a note in which you tell your kid not to have a party while you are out grocery shopping: your kid may listen, but he or she can just as easily decide not to. If you actually want to block a crawler from accessing files, you have to work with the ".htaccess" file, which is a different story.
  2. A robots file is not a place to hide files. It really just tells a crawler where the administrator does not want the crawler to go. It is as if you tell a robber, "do not take the key that I leave under the mat." Now, the robber knows where the key is, and there is nothing actually preventing the robber from taking the key.
In sum, the Robots Exclusion Protocol (REP) is a set of guidelines by which well-intentioned web crawlers are expected to abide. A robots file lists files and/or directories that a crawler should not crawl; however, nothing actually enforces the restriction.


Sunday, January 16, 2011

MobileMe Sync Options

Hello!

A long time ago, I wanted to be able to sync any folder with my MobileMe disk. Unfortunately, the only way at the time was to drag that folder over to MobileMe - and I'd have to do that every time I modified the folder. Conveniently, I was working with AppleScript at the time. (Perhaps working with AppleScript is even what prompted the idea for the MobileMe sync script I was about to write.)

I planned to write an AppleScript that would sync the folders I wanted with the MobileMe (iDisk) folders - and create new MobileMe folders, if they did not exist.

Below is the simple code, which utilizes rsync.
Code:

-- sync the contents of thisFolder into thatFolder (rsync is run through AppleScript's shell bridge)
do shell script "rsync -a -E -4 -v ~/Documents/thisFolder/ ~/Documents/thatFolder/"
NOTE: It is very important to put a slash at the end of the source and destination paths. Without the trailing slash on the source, rsync copies the folder itself into the destination folder, whereas the goal here is to copy just the folder's contents into the destination folder.
It might seem that you could simply drop the slash from the source and point rsync at a destination one level above the desired one: by syncing - which in this case you can picture as copying - the folder into a hierarchically higher destination, the folder would end up one level lower, right where it should be.
Still, I believe the code above is the better approach, because the user does not have to worry about matching folder names, and it is generally less confusing.

Later, I will post more dynamic code. Essentially, one AppleScript adjusts a text file with all folder sources and corresponding folder destinations; the other reads from that text file and syncs appropriately.
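In the meantime, here is a minimal sketch of that idea in Python rather than AppleScript, just to show the shape of it. The mapping-file name, its tab-separated format, and the rsync flags are my own assumptions, not the script I plan to post.
Code:

import subprocess

MAPPING_FILE = "sync_pairs.txt"  # hypothetical file: one "source<TAB>destination" pair per line

def sync_pairs(mapping_file):
    with open(mapping_file) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            source, destination = line.split("\t")
            # Trailing slashes make rsync copy the folder contents, as noted above.
            subprocess.run(["rsync", "-a", "-E", "-4", "-v",
                            source.rstrip("/") + "/",
                            destination.rstrip("/") + "/"],
                           check=True)

if __name__ == "__main__":
    sync_pairs(MAPPING_FILE)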

Monday, January 10, 2011

Waiting for Root Disk

Recently my MacBook Pro suffered a quasi-fatal catastrophe: the hard drive failed. To help anyone in a similar situation, I have compiled a list of symptoms and a test.

Symptoms:
  1. During the start-up process, either the gray apple never shows up or, at some point, it is replaced by one of the following icons:
    (1) a folder with a question mark

    (2) a gray circle with a slanted line through it

  2. After a few minutes - or any amount of time substantially longer than your usual boot time - the spinner that usually appears under the gray apple still has not appeared.
Test:
  1. Start up your computer
  2. When the screen turns gray and/or you hear the start-up sound (whichever comes first), hold down "command + v" until the screen turns black
  3. A lot of white text will be displayed over a black background
  4. After a while, if the last line of the text says "still waiting for root disk," your hard drive has most likely failed
If you get any combination of the symptoms that I listed, you should run the test; if the test returns positive for possible hard drive failure, it would be prudent to go to the Apple store immediately and learn about your - mind you, expensive - disk-recovery options.

UPDATE:
I went to pick up my computer, and one of the Apple Geniuses said that the cable connecting my hard drive to the rest of the computer had come loose. There was no actual damage to the hard drive, so I do not have to get anything recovered - nothing was lost. If you experience the "waiting for root disk" problem, definitely take the machine to the Apple store so they can run further diagnostic tests, because those tests may reveal the problem to be something less expensive to fix.

Thursday, December 23, 2010

Share your programming knowledge; don't retain it

A while back, I sought some information about writing emulators. I thought up a general idea of how an emulator would work just to get my brain thinking in that mode. Once I had a plan, I searched the web for forums, articles, and people to correspond with about emulators. (After all, part of the whole programmer/hacker ethic is not reinventing the wheel.)
Finally, I found a forum. Good and bad came from that. The good was that the forum discussion confirmed my general plan (I was happy to have thought about emulating systems the same way that others do); the bad was that one user completely disregarded the let's-not-reinvent-the-wheel idea. Basically, he or she wrote that anyone who has to look up how to write an emulator is not smart enough to write one - and never will be.
This bothered me for a multitude of reasons:
  1. His or her statement implies a belief in reinventing the wheel. After all, if you cannot ask for help, you are your own only resource and must, as a result, reinvent the wheel.
  2. He or she clearly either does not know how to write one - otherwise, he or she would have at least posted something more helpful - or is simply ignorant, or both.
Quite frankly, I believe that knowledge about programming should be spread. I do not mean just giving away code for people to copy; I mean prompting them, having them learn on their own, but helping them as soon as they hit a bump. There truly is no point in postponing their acquisition of knowledge. Think of postponing that acquisition as slowing down the general advancement of technical knowledge.

On a final - and unrelated - note: if anyone is interested in writing an emulator with me, comment and we could possibly set up a repository for the code. Either way, I'll finish up my emulator whether or not I have any takers.

Share.

Proxy Server

General Information:
Essentially, a proxy server is a server that connects a computer to another server. Proxies are usually used to gain access to a blocked website's server.

Let me explain, as the previous definition can be confusing without an example. To open a web page, a computer must connect to a server. (Think of a server as another computer that stores a website in a file; picture the file as an object.) With that said, when you want to load a website, the computer you are using connects to that website's server and downloads the file. Now, in a school - or work - setting, the administrators (the people who control the computer system) can block access to a server, and thus block access to a webpage file.

To bypass that, someone can connect to a proxy server, which is NOT blocked, as long as the administrators do not know about it. The proxy server asks the computer which site it wants to reach. Then the proxy server downloads the appropriate web page data/file from that website's server and sends it back to the computer that could not previously access it.

Basically, the proxy server downloads the webpage file from the blocked server and saves a copy on the proxy server, so that any computer can access it. That way, your computer gets the same webpage that you wanted from the blocked website, without ever actually connecting to that site's server.
As you can see, someone can access a blocked website through a proxy server.
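Just to make the flow concrete, here is a toy sketch in Python of the kind of web-based proxy described above. Everything specific here - the port, the "url" query parameter, the lack of any error handling - is my own assumption for illustration, not how real proxy services are built.
Code:

# Toy proxy: visit http://localhost:8000/?url=http://www.example.com
# and the server fetches that page for you and sends it back.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        target = query.get("url", [None])[0]
        if target is None:
            self.send_response(400)
            self.end_headers()
            self.wfile.write(b"Usage: /?url=http://www.example.com")
            return
        page = urlopen(target).read()  # the proxy, not your computer, fetches the page
        self.send_response(200)
        self.end_headers()
        self.wfile.write(page)  # ...and sends the bytes back to you

if __name__ == "__main__":
    HTTPServer(("", 8000), ProxyHandler).serve_forever()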

Mind you, proxy servers can themselves be blocked by administrators - they aren't impervious - because once administrators discover that a proxy is being used, they can block it too.

Real life example:
Let me now give you a real-life example. Say I am at school and want to go to "http://www.yahoo.com".
Now, let's pretend my school blocks "http://www.yahoo.com". What I would do is go to the web page of a proxy server - just as I would with any other website - and type in "http://www.yahoo.com". The proxy server would go to "http://www.yahoo.com", download the webpage, and send it back to me. Now I have the "http://www.yahoo.com" webpage, which was blocked, and I never had to actually visit "http://www.yahoo.com".

Simplified definition:
In sum, a proxy server duplicates the website that you want by copying the website's files; it then sends those files back to the user.

--------------

Extra:
Of course, in real life, proxy servers are more complex: they have to present an interface to the user; they have to interact with other servers; and they have to rewrite the links and functions in the page so that all interactions are directed through the proxy server (directing links and functions through the blocked website would not work, because the blocked website is, of course, blocked). However, the examples above should be enough to give someone a general understanding of proxy servers.


Here I just want to write some of the extra considerations and specifics of running a proxy server.
It seems that the programmer of the proxy server would have to adjust all of the links on the webpage. One simple way to achieve this is to remove the original, blocked website's domain name from the links, which tells the browser to resolve those links against the current - in this case, the proxy's - domain name. Mind you, well-built websites often already omit the domain name from their own links, because a link without a domain name is implicitly resolved against the active website unless explicitly specified otherwise.
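Here is a naive sketch of that link-rewriting idea in Python. The "url" parameter style matches the toy proxy above, and the function name and example page are made up for illustration; a real proxy would need something far more robust than a string replacement.
Code:

# Rewrite absolute links that point at the blocked site so they are routed
# back through the proxy instead of straight to the blocked server.
def rewrite_links(html, blocked_site, proxy_base="/?url="):
    return html.replace('href="' + blocked_site, 'href="' + proxy_base + blocked_site)

page = '<a href="http://www.yahoo.com/news">News</a>'
print(rewrite_links(page, "http://www.yahoo.com"))
# prints: <a href="/?url=http://www.yahoo.com/news">News</a>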
Also, the proxy server would have to crawl the website to download all of the necessary files. This would only be hard if the website's robots file prevented crawling and, as a result, prevented the proxy from finding and downloading the necessary files.

UPDATES:
This article may or may not be updated in the future. If I decide to make my own proxy server for experience purposes, I will most likely provide an update that will include my experiences, lessons I've learned, etc.
