Saturday, December 17th, 2005


Vincent, I see you checking in on me! I’m working on it!

This evening I printed out the code and followed it step by step. I can see that, although I have learned from the way the author wrote the plugin, I would do it somewhat differently. Things are complicated by copyright issues. Although the WordPress Codex states that “any license you choose to use must be compatible with the GPL”, the author made no reference to the GPL and explicitly retained copyright in one of his files. This means I can’t publish my modified files. However, I’m going to take a chance and write and publish a howto for modifying the original files, and we’ll let the lawyers work it out. When I get the time, I’ll start on a new plugin from scratch that I will publish under the GPL.

So, back to work. I want to get this out.

Posted by Greg as Programming at 12:06 PST


Friday, December 16th, 2005

Holy Crap!

I’ve been looking at the differences between the published version of the WordPress plugin WP-UserOnline and my hacked implementation of it, and boy did I go to town! I even modified the database table the plugin created so I could add the ability to show visitors’ user agents as well – I remember doing it using mysqladmin. Definitely not a casual hack, although I could probably publish how to modify the installer to do the same thing.
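For anyone curious, the table change would have been something along these lines. The table and column names here are my guesses, not the plugin’s actual schema, and in WordPress you’d normally run the statement through $wpdb rather than building it by hand:

```php
<?php
// Sketch of the kind of ALTER TABLE I ran to record user agents.
// 'wp_useronline' and 'user_agent' are assumed names, not the plugin's real ones.
$table = 'wp_useronline';
$sql = "ALTER TABLE {$table} ADD COLUMN user_agent VARCHAR(255) NOT NULL DEFAULT ''";
echo $sql;
// In a real installer you'd hand this to $wpdb->query($sql) once, at activation.
```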

Unfortunately, I can’t even post my own versions of the files without sorting through and publishing a new readme file – in my version, I changed the way the plugin accesses the underlying WordPress database tables. So I’m looking at a lot of nit-picking work just documenting my hacks. I have to wonder whether it’s more efficient to just start my own plugin. I guess I’ll wait for the masses to weigh in on that issue.

Posted by Greg as My Website, Programming at 11:50 PST

1 Comment »

Me and My Big Mouth

Aren’t search engines truly amazing? Somebody noticed my old WordPress User Online plugin gripe, and not only asked me how I got the original plugin working, but how to implement some of the things I had described wanting to do. So by popular demand (yes, one person, especially if he claims to be representing others as well, is enough to constitute “popular demand” in my tiny little stake on the Web), I am pulled back into the plugin programming business. About time. I’ve been too caught up in relatively inconsequential tweaking and need to get back to hardcore coding to achieve my goal of truly learning PHP.

Vincent, I am very interested in developing a plugin along the lines I laid out earlier, and would welcome feedback on where to take it. But first, let’s figure out how I got the old usersonline plugin to work and how I got it into the sidebar, so we can at least have that. I don’t remember how I did it off the top of my head, but I’m sure I can figure it out. Unfortunately, it’s a pretty busy time of year, so bear with me.

First, to answer my own old question, I had figured out why the plugin recognized me from one place but not at another. It relied on the comment_author_ cookie, which records the name I gave when I wrote a comment, and I hadn’t used the computer at the second place to write any comments – it hadn’t taken me too long to realize that commenting on my own posts was silly.
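For the curious, the check boils down to scanning the cookie array for a name starting with comment_author_ (WordPress appends a hash derived from the site URL to the cookie name). Here’s a minimal sketch using a simulated cookie array rather than the live $_COOKIE:

```php
<?php
// Sketch: how a plugin can recover the commenter's name from the
// comment_author_ cookie that WordPress sets after you leave a comment.
// The hash suffix in the simulated cookie name below is made up.
function get_comment_author_from_cookies(array $cookies) {
    foreach ($cookies as $name => $value) {
        if (strpos($name, 'comment_author_') === 0) {
            return $value; // the name the visitor typed into the comment form
        }
    }
    return null; // no comment cookie: this visitor looks like a guest
}

// Simulated cookies, as if the visitor had commented here before.
$cookies = array('comment_author_91f2b1' => 'Greg', 'PHPSESSID' => 'abc123');
echo get_comment_author_from_cookies($cookies); // prints "Greg"
```

That also explains the symptom exactly: a browser that never submitted a comment has no such cookie, so the plugin can’t recognize you there.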

To answer your question, I installed the WP-UserOnline plugin (WP-UO) by GaMerZ back when I was using an earlier version of WordPress and found a few problems with it. I started reporting the issues I found on the author’s forums but managed to beat anyone else to the workarounds, so I reported those too. Or something like that. I was pretty frustrated because the author’s style and methodology were so completely different from mine, and he hadn’t followed the plugin guidelines at the Codex.

One of the things that might stop the whole plugin from working at all when you upgrade to WP 1.5 or higher is that the original plugin relied on hacks that would have been eliminated by file replacement during an upgrade. I ended up upgrading to 1.5.2 less than two weeks after I started with WP-UO, and had to re-hack certain files. Thankfully, I have learned to keep good documentation on any hacks, and have a record of what I did. I also see from earlier posts (unfortunately, I didn’t document everything I did to tweak this plugin) that I was only able to go to the WP-UO page from the root directory of my blog because of a problem handling permalinks, but somehow I solved that.

I’m going to pull all my WP-UO files off my site and run a compare to the originals. When I get back we’ll take a look at the differences, but I just wanted to let you know that I was on it.

Posted by Greg as My Website, Programming at 06:01 PST


Sunday, August 21st, 2005

Plugin Programming Quest

Is there anything so frustrating as using someone else’s code, finding mistakes, and trying to edit it to your satisfaction?

I spent some time trying to clean up the useronline plugin and managed to modify it so that the modifications to the wp-settings.php file were no longer required, and was working on removing the requirement to install other files outside of the plugins folder. I also moved the display from the header to the sidebar, where I think it belongs. I’m still stuck with one file in the root directory of the WordPress installation, and got fed up with tracing the program steps (it would be easier if I had complete familiarity with PHP and the WordPress Loop, but that is still developing). So I checked to see if there was another online-users plugin and found wp-infos. I installed this plugin for comparison (and to milk the code for examples), and was dismayed to see just as many errors. Hell, the damned thing isn’t even XHTML Strict compliant – I got quite a few errors when I did a check using the W3C validator.

What I really want is a sidebar section that displays visitors and a little technical information, such as IP address and user agent. I like the useronline division into members, guests and bots, but find fault with its implementation. The author has stated in his forum that the plugin works off IP addresses, and that would explain why I am recognized as a member when accessing from work but not from home, which is unsatisfactory; and his system of identifying bots is clearly inadequate. Of course, if I want visitors to register as users, I have to offer something worth registering for, and I don’t have anything yet. I’d settle for a pattern recognition system that identified repeat visitors, which would hopefully allow me to identify friends and family as they dropped in, but pattern recognition is really cutting-edge programming.

I suppose I could use cookies to identify repeat visitors, but that relies on visitors who don’t block or clear cookies, and I’d have to learn cookie manipulation. Actually, that’s not a bad way to go, and I think I’ll pursue it. By rights I should also develop and publish a privacy policy, but since I don’t have any commercial interest, that shouldn’t be hard.
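A first sketch of the cookie idea might look like this. The cookie name and one-year lifetime are my own choices, not anything from an existing plugin:

```php
<?php
// Sketch of the repeat-visitor idea: on each page view, look for our own
// identifying cookie; if it's missing, mint a random ID to hand to setcookie().
// The cookie name 'wp_visitor_id' is an assumption for illustration.
function identify_visitor(array $cookies) {
    if (isset($cookies['wp_visitor_id'])) {
        return array('id' => $cookies['wp_visitor_id'], 'returning' => true);
    }
    $id = bin2hex(random_bytes(8)); // fresh random visitor ID, 16 hex chars
    // In a real plugin, before any page output:
    //   setcookie('wp_visitor_id', $id, time() + 365 * 24 * 3600, '/');
    return array('id' => $id, 'returning' => false);
}
```

Pair the returned ID with a database row holding a friendly label, and “identify friends and family as they drop in” becomes a simple lookup rather than pattern recognition.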

I was looking for a replacement project for my Google Fame plugin, but this just doesn’t feel as original and attractive. It does involve some core skills I want to learn, though, so it may well do.

Posted by Greg as My Website, Programming at 05:31 PST


Thursday, August 18th, 2005

WordPress Upgrade 1.5.2

The other night I upgraded my copy of WordPress to 1.5.2. It took me hours, mostly because I have customized some of my administrative and system files, and I’m using that useronline plugin that breaks the rules and installs stuff in the wrong places. (I’m not entirely happy with that plugin, and there are mods I want to make to its functionality, so if I find a better plugin I’ll use that.)

Even with my documentation of my mods, I had to carefully sort out the new versions of the files I modified, make the same mods, and redo the documentation. I stashed my old versions away in case the upgrade didn’t take. I wish I could have done everything on one of my offsite backups first and just uploaded the result, but you have to upload all the new files and run the upgrade first, so after I redid the mods and documentation offsite, I had to match it all up with the online stuff as well. What a pain in the ass. I need to automate the process, but I have spent so much time doing my programming in scripting languages that I’m a little rusty with freestanding code, and I never really got the hang of Windows apps. I don’t even have a compiler installed.

Then I found that my “Write Post” function wasn’t working at all. What a bitch! Did I need to go back and start over? And I couldn’t even post my bitching about it! I looked into setting up posting by email – actually, I still need to set that up – but finally figured out that if I switched over to the standard editor instead of the advanced one, the interface showed up in the “Write Post” section, so I’ll use that for now.

I want to post this problem on the WordPress support forums, but etiquette dictates I search the previous posts to see if anyone has experienced a similar problem first. What I really need to do is go to the codex and learn how WordPress works, including stuff like The Loop, so I can do my own troubleshooting, and it would give me a lot better understanding for writing plugins.

Posted by Greg as My Website, Programming at 10:56 PST

Comments Off on WordPress Upgrade 1.5.2

Wednesday, August 10th, 2005

User Statistics

I continued looking at the little quirks and bugs associated with the UsersOnline plugin and have to conclude that it’s an example of some sloppy coding. The author didn’t follow the WordPress plugin guidelines, and he didn’t consider how users customizing their blogs would affect his code. It’s a pity, because the basic concept looks good, and when I tweak his execution errors it seems to work well, although I am frequently listed as a “guest” even though I am always logged in. (So much so that when I occasionally log off to test something, I have difficulty remembering my username and password – last time, I had to log into my database and pull up the users table to find my username!)

Still, with so little time to diddle (I’ve been working in the field a lot, which takes away the midmorning, lunch and midafternoon breaks I use for writing, tweaking and coding), I can’t put in the effort it takes to edit his code, although I find I am perfectly capable of hunting down the little mistakes and finding the correct way of doing things. I have a new toy to play with – raw server log files!

I can’t recall how I noticed, but my website email had reached its 25 MB limit – sorry if you tried writing me and got a “mailbox full” message. Looking at my provider’s plans, I realized that the 1 GB account was only $10 a year more – a much better deal! I upgraded, and while looking at the offerings, finally broke down and subscribed to the extended traffic monitoring service, which was only $3 a month.

I’ve skimmed through the reports that come with this package, but the thing that really jumps out at you is the raw log files. After downloading my entire history in a fraction of a second (3.5 megabits per second at home!), I was able to run some quick searches and learned some very interesting things right away. Such as the fact that someone came to my website from a search on one of my company’s competitors’ names and found the NACE website I’m building (hmmm – maybe that should go in the robots.txt). This someone spent some time poking around in the NACE website, and then later, the NACE site was poked around again, in the evening, from a different IP address. Both IPs are in my town. Was it the namesake competitor looking to see what I was up to?

Lots of potentially juicy stuff in there – hidden among my own accesses and the bot crawlers. I need to set up my own server!! A Linux box, Apache, PHP, MySQL – and whatever else I want. I’d have to check my ISP’s TOS, but I think non-commercial use is fine. I’d also have to tweak my upload/download ratio and give up some of that juicy speed, but there seems to be plenty of it. Hmmm.
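The kind of quick search I ran can be done in a few lines of PHP. This sketch pulls the interesting fields out of one line of an Apache “combined” format access log; the format is standard, but the sample line below is invented:

```php
<?php
// Sketch: extract IP, timestamp, request, status, referrer and user agent
// from one line of an Apache "combined" format access log.
function parse_log_line($line) {
    $pattern = '/^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) \S+ "([^"]*)" "([^"]*)"/';
    if (!preg_match($pattern, $line, $m)) {
        return null; // line doesn't match the combined format
    }
    return array('ip' => $m[1], 'time' => $m[2], 'request' => $m[3],
                 'status' => (int)$m[4], 'referer' => $m[5], 'agent' => $m[6]);
}

// Invented sample line, shaped like the real thing.
$line = '10.0.0.5 - - [10/Aug/2005:21:02:11 -0800] "GET /nace/ HTTP/1.1" 200 5120 '
      . '"http://www.google.com/search?q=competitor+name" "Mozilla/4.0 (compatible; MSIE 6.0)"';
$hit = parse_log_line($line);
echo $hit['ip']; // prints "10.0.0.5"
```

Loop that over the file and filter on the referrer field, and spotting search-engine arrivals like the one above becomes a one-liner.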

Posted by Greg as My Website, Programming at 21:02 PST


Wednesday, August 3rd, 2005

Busted!
I couldn’t figure out why my parse_google function wasn’t working (which is the root code of my GFame plugin) and went back a few steps and put in some debugging code to see exactly what Google was returning when I called file_get_contents. And this is it:


We’re sorry…

… but we can’t process your request right now. A computer virus or spyware application is sending us automated requests, and it appears that your computer or network has been infected.

We’ll restore your access as quickly as possible, so try again soon. In the meantime, you might want to run a virus checker or spyware remover to make sure that your computer is free of viruses and other spurious software.

We apologize for the inconvenience, and hope we’ll see you again on Google.

So I did some checking and found the Google Terms of Service, which specifically prohibit what my GFame plugin was trying to do. The only authorized automated requests are through the Google API, which is what I started with, but rejected because the results came out so differently than what you get if you type the same terms into a regular Google interface. I suppose I might have read something about this when I first started playing with the API, but forgot it by the time I decided to try parsing the search results instead. I was never intentionally trying to violate the TOS.

So I guess my little project is going to die. Bummer. I was learning a lot. I’ll have to come up with a new one.

Posted by Greg as My Website, Programming at 15:05 PST

Comments Off on Busted!

Monday, July 11th, 2005

First Real Deployment of GFame

Well, I uploaded it. In my right sidebar is my first real deployment of GFame as code, not a simulation. The data you see displayed is pulled from my website’s MySQL database (db), not hard coded into the template.

I had previously uploaded and activated a file that I called GFame in my WordPress plugin folder, but all it did was establish a page in my administrator’s Options in the backend – it only had one line for input (outdated now, because it asks for my Google API key), and it didn’t do anything. I’ve replaced it with this deployment.

As of now, the GFame plugin contains a set of functions that accesses my db, pulls out the stored values, and displays them (with options). I still need to:

Yes, I’m still doing the lookup searches using my interactive parser and entering the results into the database manually. But I’m making progress. I don’t get much time to code.

Based on stuff I’ve read on the Internet, particularly at Googlebar’s homepage, I probably shouldn’t include the PageRank functions in my distributed plugin. It might make me a target for the Google lawyers. But who knows, maybe the parsing is enough to get them hot – it does bypass their ads.

Posted by Greg as My Website, Programming at 14:42 PST

Comments Off on First Real Deployment of GFame

Wednesday, June 29th, 2005

Google Fame New Start

UPDATE: Although it might be interesting from a technical point of view, I had to abandon the programming effort I describe herein because I found out that it violates the Google Terms of Service.

Well, as I discussed earlier, I have explored using PHP to generate Google searches and parsing the resulting HTML files to find the information I want. Parsing refers to taking a big chunk of data and analyzing it to extract the needed information. In the case of my Google Fame plugin, I want to send search requests to Google and search through the results to find any links that go to my website. I also want to note Google’s estimate of how many results there are, and how far down the list the first reference to my website is.

Normally you would use a browser to connect to Google. Once connected, you can enter your search terms, maybe adjust a few parameters (like requesting only pages that are in English, for example) and search. The Google homepage is actually a form that you enter data into first, or you might use the Google Toolbar in Internet Explorer or the Googlebar extension in Mozilla or Firefox to pre-enter the form information and skip the Google front page. Either way, you’re sending a formatted request to the Google site, and Google analyzes that request, finds what you want, and sends the information back to you formatted as html code that your browser converts into a webpage.

I’m writing code that skips all the interactive, user-visible steps. I’m going to set up my own interface to determine what the search terms and parameters should be and convert that into a call to Google. For example, if I want to search using the terms “greg perry” and “san diego”, and I want 100 results each time, I could send a string like

http://www.google.com/search?q=%22greg+perry%22+%22san+diego%22&num=100

to Google and it would send me the results back. There are actually many more variables I can put into the request I send to Google, but I’ll keep it simple here.

But I don’t want to look at the results, I want my program to look at them for me and find what I want. So instead of letting my browser display the results, I capture the information that Google returns and put it into a string. It’s a long string – about 100,000 characters, or 100 KB – but nowadays that’s not a problem. Then I use other commands to search through the long string to find the information I want. At this point in time, Google will only give me up to 100 results per search, so if my website isn’t in the first try, I have to generate another call asking for the next 100, and so on, until I find my site, or reach the end of the list and determine that my site didn’t make it.
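The request-building step can be sketched in PHP. The parameter names (q, num, start) match Google’s web interface of the day, and hl is the language restriction I mentioned; treat the whole thing as illustrative rather than a guaranteed API:

```php
<?php
// Sketch: build the URL for one "page" of Google results.
// $start is the zero-based offset of the first result wanted.
function google_search_url(array $terms, $num = 100, $start = 0) {
    $params = array('q' => implode(' ', $terms), 'num' => $num,
                    'start' => $start, 'hl' => 'en');
    return 'http://www.google.com/search?' . http_build_query($params);
}

echo google_search_url(array('"greg perry"', '"san diego"'));
// The next step in the real program is simply:
//   $html = file_get_contents(google_search_url(...));
// and $html is the ~100 KB string that then gets parsed.
```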

The commands in PHP use a complex technique of wildcards, originally developed in the programming language Perl, called “regular expressions” or regex. I found regex pretty difficult to understand at first, but using tutorials and samples I found on the web, I was able to cobble together some workable code. It may not be elegant, and it might give errors if unexpected results are encountered, but it’s enough for now.
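A stripped-down example of the regex approach, run on a toy fragment of result HTML (real Google result markup was much messier than this, so take the pattern as a sketch, not the production version):

```php
<?php
// Sketch: pull every http link target out of a chunk of HTML with
// preg_match_all. Returns the captured URLs in document order.
function extract_result_links($html) {
    preg_match_all('/<a[^>]+href="(http[^"]+)"[^>]*>/i', $html, $m);
    return $m[1];
}

// Toy fragment standing in for a page of search results.
$html = '<p><a href="http://example.com/">Example</a></p>'
      . '<p><a href="http://gregsplace.example/blog/">My site</a></p>';
print_r(extract_result_links($html));
```

Once the links are in an array, checking whether any of them point at my own site is just a string comparison per entry.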

Now, the Google API is designed to allow programmers to do this sort of thing without parsing the html files. You make program calls to the API and get preformatted results back from Google. The trouble that I found is that the results are wrong – sometimes seriously different from what you get if you go to Google in your browser and send in the same information. You also have to register with Google when you download the API and get a key that you have to include with your API calls – the key allows Google to associate those API calls with you, and the number of calls you can make in a day is limited. So for me, the API isn’t worth the programming steps that it saves.

What’s an API, you might ask? Here’s a good definition from Arizona State University:

Short for Application Program Interface, API is a set of routines, protocols, and tools for building software applications. A good API makes it easier to develop a program by providing all the building blocks. A programmer puts the blocks together. Most operating environments, such as MS-Windows, provide an API so that programmers can write applications consistent with the operating environment. Although APIs are designed for programmers, they are ultimately good for users because they guarantee that all programs using a common API will have similar interfaces. This makes it easier for users to learn new programs.

So far, I’ve created a program that uses a simple form to set up the Google search and go through the results. The program is set up to spit a lot of info back, including the contents of various variables I use so I can check that my code is functioning properly, and the complete listing of all the websites that Google found that match my search request. My program counts and numbers the matches and sets a flag when my site is found in the results. It stops sending calls to Google when it finds a reference to my site, or if I hit the limit of how many results Google is willing to provide. (While playing around with this, I found that Google doesn’t seem to give you more than 1000 results, even if it tells you that there are a lot more.) Then it tells me my website ranked X out of XX results (zero if I didn’t appear at all) which is all I’m really looking to know. You can play with my program so far if you want.
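The paging loop reduces to something like the sketch below. The $fetch callback stands in for the fetch-and-parse steps shown above; here it’s a fake that reads from a canned array so the logic can be followed end to end:

```php
<?php
// Sketch of the ranking loop: page through results $pageSize at a time until
// our own domain shows up or we hit Google's ~1000-result ceiling.
function find_rank($mySite, $fetch, $pageSize = 100, $limit = 1000) {
    for ($start = 0; $start < $limit; $start += $pageSize) {
        $urls = $fetch($start, $pageSize);
        if (empty($urls)) {
            break;                         // ran out of results early
        }
        foreach ($urls as $i => $url) {
            if (strpos($url, $mySite) !== false) {
                return $start + $i + 1;    // 1-based rank
            }
        }
    }
    return 0; // site never appeared
}

// Fake fetcher: 250 results total, with "my" site planted at position 183.
$fake = function ($start, $num) {
    $all = array_fill(0, 250, 'http://other.example/');
    $all[182] = 'http://gregsplace.example/';
    return array_slice($all, $start, $num);
};
echo find_rank('gregsplace.example', $fake); // prints 183
```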

So now I’ve pretty much caught up to where I was when I started getting disgruntled with the Google API. Next, I want to expand the code I’ve written to interact with my online database so that it retrieves preset search terms from one place, gets the results, and stores them in another place in the database. Then I have to create the interfaces. I need two – one in my WordPress Administrator area (also called the “backend”) that allows me to put the search terms into the database; and one in my blog (the “frontend”) – I point again to the space I marked out in my right sidebar – so that people can see the results. Somewhere I have to have a way of telling my program when to run. My choices are to have it run once a day all by itself; to launch it from my backend; or to make my frontend interface figure out when it’s me looking at it, and offer me a way to run it while I’m there, without bothering to go into the backend.

The next step after that is to package up the various programs as a plugin and make them available to other WordPress users so that they can use it, too. This might be the hardest part – I also have to write programs to install the plugin and set up the required elements (such as the database tables I use) that I just manually set up on my own website. I also need to register the project with WordPress and create an area in my website to make the download available to the public. This area will probably have to include documentation of the plugin and a forum for users. I’ll have to offer technical support for people who have trouble getting it to install and work properly, and respond to feature requests from people that want it to do something else or do it differently. How much work that will require depends on how well I program it in the first place, how popular it gets, and how close I am to anticipating what other people want.

It could end up being a lot of work. So why bother? Here’s what I anticipate getting out of it:

I just spent a significant chunk of time that I could have used for programming to document my efforts. Well, that’s OK. I enjoy writing, too.

Posted by Greg as My Website, Programming at 12:41 PST

1 Comment »

Friday, June 17th, 2005

Google API

I’m starting to get thoroughly discouraged about the usefulness of the Google API. I just ran a couple of tests. I searched for “greg perry” “san diego” using the Google API and got a ranking of 183 out of 483. That’s significantly lower than the previous 35 out of 594, but when you have a PageRank of zero you have to take what you can get. Since last time Google was picking up the search terms in a discussion of mine about using them, and that post has since moved off the front page of my blog, I have to assume that some consideration is given to how deep in the website hierarchy the terms are found when determining how relevant Google thinks your page is. As a matter of fact my SDF site showed up significantly higher in the same search – I spotted it in my debug printouts.

But then I ran the same search through the regular Google web interface and got entirely different results – 56 out of 732. I noticed before that there was a small difference between the Google interface results and the API results, but this is ridiculous. It makes me think that the API, and therefore my program using the API, is worthless.

I’ll have to give some thought to this. I recently discovered how to use PHP to parse the results returned from URLs. I was thinking of using it to pull my Host report out of my stats and run the IP addresses through a whois to make a more readable report. Now I’ll consider whether I should ditch the API approach for my Google Fame plugin and parse the search results from an HTTP call to Google. If the code is economical, it would be a better idea, as the user would not have to sign up for an API key.
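The Host-report idea could start as simply as a reverse DNS lookup – a lightweight stand-in for a full whois, and one that PHP supports out of the box:

```php
<?php
// Sketch: turn a raw IP from the Host report into something readable.
// gethostbyaddr() does a reverse DNS lookup; it's not whois, but it's
// often enough to tell an ISP's dial-up pool from a crawler.
function readable_host($ip) {
    $host = gethostbyaddr($ip);     // returns false on malformed input
    return ($host === false) ? $ip : $host;
}

echo readable_host('127.0.0.1');
```

Mapping each unique IP once and caching the answer would keep the report fast even on a big log.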

Posted by Greg as My Website, Programming at 14:28 PST

Comments Off on Google API
