26 February 2005

XML Concepts: A user-support bot for discovering new XML products, cutting costs, and increasing productivity

Executive Summary

This blog post describes a user bot that discovers user tools to increase user productivity.

This note outlines a number of possible tools such a bot might find, and the series of steps the bot would go through to review online information.

The proposed tool integrates and synchronizes information from external sources, creates logic-mapping tools for user performance, and matches online tools to these maps.

Other ideas -- Add this to your del.icio.us



LEGAL NOTICE


Creative Commons License

This work is licensed under a Creative Commons License.

You may not copy any of this work to promote a commercial product on any site or medium in the universe.

If you see this work posted on a commercial site, it violates the creative commons license; and the author does not endorse the commercial product.

Free to use for non-commercial uses. Link to this original blogspot and cite as .



Discussion

One thing I've noticed about internet tools is that they can make some nice suggestions. Within a single platform, the Microsoft Office Assistant can make some useful suggestions. Also, Google word-suggest can offer some ideas on how to spell things. The same goes for a search engine that suggests another link to similar sites.

It's nice to be able to use a single platform and get suggestions; be offered another word or spelling; and other ideas. But why stop there?

Yet, I'm not really interested in other sites -- I want to get my job done. Faster. I've got all these suggestions to use a single platform to do things, but that's not the way I work. I use multiple platforms. Isn't there something that can help me out?

With all this aggregator stuff, the last thing that I want is more links. What I'd really like is something that will give me a solution, not more information for me to ignore. What to do?

What if there were a way to move away from searching for links and toward searching for products; to move beyond single-platform suggestions and focus on the way I actually work: less on a single platform and more on something that helps me across multiple platforms; less on words and more on concepts; less on additional links and more on solutions.

Users work across many platforms, not necessarily in a predictable order. They jump around between concepts, not specific words. The last thing we need is more suggestions of sites to read. What users really want is a tool that will analyze the way they do things and make suggestions on how to accomplish those multi-platform tasks more effectively.

I got to thinking about this and came up with an idea. What I'd like to do is let you know what it is and then demonstrate the types of products you might be able to get if this tool actually existed.

What I'm going to do is outline the general concept and then go into some examples that show the concept is viable.


User Bot Concept


A user-support-bot which finds user-tools on the web to increase user productivity


Unlike the Microsoft Office Assistant, which stays on a single platform, I propose industry begin looking at an XML Assistant that monitors the user's actions across multiple platforms, then makes suggestions on how to improve their productivity, such as:

  • Display new products and features

  • Visually show how task-steps could be streamlined

    The big challenge for the end-user is to find the products that might help them out. Yet, for the most part, end-user tools are designed for the generic user. There isn't something out there that works with me to create custom software tools for my individual needs.

    The other challenge is that if I run into a problem, I may be so deep into a disaster that no amount of software solutions is going to get me out of this hole before the competition.

    Rather, I'd like a tool that is going to monitor how I'm doing things, then suggest to me improved ways to do these functions across multiple platforms, so I can save time and be ready for the real disasters.

    Let's take a simple example in the XML world. I come across a link in a feed that I'd like to save and come back to. The solution I'm offered is a clip file. That's nice.

    And the next step I generally take is that I copy some key terms out of this clipped file, and create a watch list. What do I do with that watch list? I tag it; load it up to my aggregator, and then blog a short idea.

    See what just happened? I continue to make a multi-platform action after I do an initial step:


    Sample List of Repetitive Tasks


  • Watchlist created

  • Tagged item saved to bookmarks

  • Watchlist URI loaded to aggregator

  • Tag loaded to blog


    The above four steps are just one example of things I do across many platforms. I'm not the only one doing these types of things. And I'm not the only one doing this simple set of commands.
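The four-step routine above can be sketched as a simple recurring-sequence detector. This is a minimal sketch, assuming the bot logs each user action as a labeled event; the event names and the `find_repeated_sequences` helper are hypothetical, not part of any real product.

```python
from collections import Counter

def find_repeated_sequences(events, length=4, min_count=2):
    """Slide a window over the event log and keep action sequences
    that recur at least min_count times."""
    windows = Counter(
        tuple(events[i:i + length]) for i in range(len(events) - length + 1)
    )
    return [seq for seq, n in windows.items() if n >= min_count]

# Hypothetical event log: the same four-step routine performed twice.
log = [
    "create_watchlist", "tag_bookmark", "load_uri_to_aggregator", "post_to_blog",
    "read_feed",
    "create_watchlist", "tag_bookmark", "load_uri_to_aggregator", "post_to_blog",
]

# The detector surfaces the recurring four-step routine as a candidate
# for consolidation into a single one-click step.
print(find_repeated_sequences(log))
```

Once a sequence like this recurs often enough, the bot has a concrete pattern to match against external tools.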

    In the real world, you also have your own ways of doing things. They work for you.

    Let's take a break

    Take a moment right now and think about the many times that you find yourself doing the same types of tasks over and over again. It may be copying links, transferring data from one platform to another, and then doing something else with some summary data.

    We all have our patterns. The key is to recognize these patterns and find a tool that helps us. Which leads us back to the original point.

    We're busy enough just trying to keep up with information. Who has time to sit back, think about a multiple number of tasks that they do over and over, then while work is piling up, still have the luxury of finding a tool that solves that particular problem?

    Besides, the budget for consulting is only so big. It doesn't do much good to find out three years from now about a tool that would've saved me a lot of time, after spending three years looking for ... something that I'm not sure what it looks like.

    Sure, we've got our online friends. And we hope the blogs and feeds give us some ideas. But those are outside our control.

    What would be nice is if we had a custom tool that looked at what we, the individual end-users, actually did; and then this handy tool hunted down solutions for us that we could incorporate. Right there. In the office, at home, or on the road. Without any wait.

    Just like a search engine can analyze our individual searches then return ads related to those searches and links, so too can a tool be created that will analyze our tasks and return products to us that will expedite our workflow.

    In the ideal world, a tool could be created that would ensure several tasks we perform could be integrated into a single one-step process. I don't know how many times I've come across del.icio.us-support tools that someone has created.

    I keep thinking, "How can I find out about these and actually use them when I need them?"

    Someone once said that every problem has already been solved. And for every solution there's half-a-dozen solutions out there. The trick is to connect the problem with the solution. But who has time to both work and analyze their work?

    Then again, maybe someone is calling this tool a different name. Oh, wouldn't a search-engine-tag thesaurus be nice for XML feeds.

    That's where this tool comes in. It focuses on tasks, not words, links, or tags. And what makes it nice is that the task-functions are universal: Step 1, Step 2, then Step 3. Some orders are more effective than others.

    This tool will help you have more time to work [or not work], rather than spending time hunting for solutions. That makes you more productive and focused.

    For the rest of this blog post I'd like to run through some examples of how this type of tool could be used to analyze your workflow, then return to you some suggested tools to help you out.

    What I'm going to do is talk about a general platform that would do this type of search and isolate each phase into steps. This isn't necessarily the order that your custom tool would take, but it illustrates the types of steps a tool might take to analyze your workflow and find a solution for you.

    In each step, I’ll apply the theoretical bot to the web and return back a suggested tool that would help with analyzing your workflow.


    STEP ONE


    One thing everyone does is search for things. As you well know if you use Google, you can get a suggested site or product based on your search. What would also be helpful is a suggested product based on my task.

    Step one of this tool is to notice the nuances in your searches. Again, we're not focusing on links; we're focusing on the way that you search, what you do with the data, and what types of information and content solve your problem.

    Step one of the tool is to simply notice what you are doing. This phase of the analysis is to differentiate between a worthwhile outcome and a result that doesn't do anything for you.

    Suppose you're looking for a particular feature that hasn't been given a formal name; or you are visiting many sites, saving some links but not others.

    Step one of the analysis is for the tool to monitor your site-link relationship and notice what you are actually saving to your bookmarks and your aggregator. Again, we’re moving away from both links and content, and focusing more on the particular task you are engaged in.
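Step one's monitoring of the site-link relationship could be sketched as a save-rate tally per observed feature. This is a minimal illustration under stated assumptions: the feature names, the visit log, and the `save_rates` helper are all hypothetical.

```python
from collections import Counter

def save_rates(visits):
    """For each feature seen on a visited site, compute how often visits
    with that feature ended in the user saving the link."""
    seen, saved = Counter(), Counter()
    for features, was_saved in visits:
        seen.update(features)
        if was_saved:
            saved.update(features)
    return {feature: saved[feature] / seen[feature] for feature in seen}

# Hypothetical observations: (features present on the site, did the user save it?)
visits = [
    ({"search_engine", "single_feed"}, True),
    ({"search_engine", "per_link_feeds"}, False),
    ({"search_engine", "single_feed"}, True),
    ({"search_engine", "no_feed"}, False),
]
rates = save_rates(visits)
# Sites offering a single feed are always saved; the bot can infer that
# this feature matters even though the user never named it.
```

The point of the sketch is that the signal comes from behavior (save vs. not-save), not from the link content itself.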

    A sample search: Search Engines with XML Feeds

    Let's suppose your task was to find all sites that have search engines with XML feeds in the search result. As you save your links, the tool would notice that even though you arrived on Syndic8 [which has feeds for each link], you didn't spend much time there and didn't save anything from the site.

    As you visit other sites with search engines, the tool would be able to figure out that you’re not interested in just search engines, or just feeds, but you are looking for particular search engine platforms that only deliver a single feed for that overall search.

    Yet, what you don’t want are:

  • Search engines that have no feeds;

  • Sites and engines that give you links to individual feeds;

  • Feed search engines in foreign languages;

  • Sites which prevent you from typing quotes in your search phrase as you want them; or

  • Search engines that say they provide a feed but do not actually work.

    Again, the tool starts to notice that what you want is a search engine that has a very specific task: Provides a single feed for that search, and there are no feeds for the individual links.

    The tool at this point starts to do the hunting for you, and adjusts your search results. In this case, it returns the following tool:


    XML Search Platform Discovery

    Finds search platforms with a single xml feed option


    The bot tells you that it found a tool that has the following abilities, and that the system is now in place on your computer, ready for you to now use:

  • Stand-alone system. Discovers search platforms with a single XML feed result.

  • This search platform injects phrases into the secondary platform, tests the outputs, and identifies differences in content and time.

  • Tool evaluates integration of reporting output with the aggregator. Identifies strengths and weaknesses of each platform. Identifies novel features.

  • Shows which platforms have which features: [Types of possible features]. Compares the advertised steps and features with actual results. Shows which types of search work with which platform.

    The bot knows this is a new tool for you and that you may want to talk to others. The bot says, "To discuss this more use the following tag ."

    Review what the bot just did

    Let’s review. The tool has noticed that you are [A] searching in the platform, and then [B] saving links in your bookmarks, tags, and aggregator. Today, "noticing a user's movement across multiple platforms" is not something the normal search engine does.

    Also, this tool moves to the top the results that are related not to links, but to specific features that you consistently identify in your saved links, even if you have no name for that feature. The tool analyzes what you are saving and uses a multi-tag array-analysis to figure out which features, aspects, content, or problems you might be trying to solve.

    All the bot did was watch what you were doing, and find something that would make your life easier. The neat part: It did this while you were working, and injected the new steps into your procedures. The bot informs you of what you no longer have to do, and gives you guidance on how to apply the tool in your tasks.

    The key: The bot did the work, found the tool, did the analysis, uploaded the information, and ensured your platforms were integrating and that the tool would improve your workflow. This saves you a lot of time.

    Array Search Report

    When I say a multi-tag array, I mean that we can put the characteristics from site A into a search string, and file this as a response. Then site B has another set of parameters.

    The more sites that we visit, the greater the confidence that our unique characteristics and features start to have greater hits. If we were to compare Site A characteristics alone, there’s no way to know which of those features is more or less important; but as I add site B, I’ll come up with another list of features, and some may overlap.

    As we increase the number of sites visited [theoretically approaching 100%], some characteristics have greater recurrence. That’s where this tool comes in. It analyzes the features by laying out the content, features, and aspects of each saved-link and puts them in an array.

    Rather than only giving you search results based on yes-or-no responses, the array approach focuses on weights. If you do a simple 2-by-2 array of two factors on site A and two factors on site B, there may be some overlap, and some factors may have no commonalities.

    Site A may have two factors, like RED and SEARCH Box, while Site B has the features SEARCH Box and LINK to Google.

    This is just a simple example. But imagine what would happen if we use an array-search-analysis for these two sites: We'll see that the common link, or higher weighting, is associated with SEARCH Box, while the other two parameters are low in frequency, approaching zero.

    What can happen next is the search tool will put more weight to that common factor, and then take the third array from your next round on site C, and notice what you do across multiple platforms.

    Site C may have a different set of parameters. So the search tool then asks, "What’s the common element here?" And it goes back and reviews site A and B [while you are busy working on another task], and then reviews site C again to find the common pattern.

    In short, what we have is a user-support-bot that analyzes the end-user task performance, and collects data to assign weightings. The bot will identify patterns and characteristics, and then adjust the search to find more of that type of feature.
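The weighting scheme described above can be sketched directly from the Site A / Site B example. This is a minimal illustration; the `feature_weights` helper and the Site C features are assumptions added for the sketch, not part of the original example.

```python
from collections import Counter

def feature_weights(site_features):
    """Weight each feature by the fraction of saved sites on which it recurs."""
    counts = Counter()
    for features in site_features.values():
        counts.update(features)
    total = len(site_features)
    return {feature: n / total for feature, n in counts.items()}

# The Site A / Site B example from the text, plus a hypothetical Site C.
sites = {
    "A": {"RED", "SEARCH box"},
    "B": {"SEARCH box", "LINK to Google"},
    "C": {"SEARCH box", "blue banner"},
}
weights = feature_weights(sites)
# "SEARCH box" recurs on every site (weight 1.0); the one-off features
# carry low weight, approaching zero as more sites are added.
```

As the number of visited sites grows, the recurring features rise toward 1.0 and the incidental ones fall away, which is exactly the confidence effect the text describes.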

    The bot knows the types of features that would characterize a successful system:


    Retrieval System Goals


    Analyze movement and identify repetitive duplicate tasks across multi-platforms to find tools that will more efficiently complete user tasks.


    Again, we’re focusing on finding capabilities and tools on the web that will allow you to move through these tasks more easily. The bot knows the useful sites are those sites that do the following:


    Success Criteria of the Returned Systems


  • Reduced time-on-tasks

  • Expedite data integration

  • Eliminate redundant typing, data entry, copying, and reposting; reduce the chance of minor typographical errors having a large impact and causing delays in other uses on other platforms

  • Group repetitive movements and tasks into a single routine; drive users to select and use tools that consolidate many steps into a single one-click approach

  • Cross-platform integration

  • One-entry with multiple uses and approaches


    This tool is something that will allow users to go about their work while the system discovers the desired tools to consolidate these repetitive tasks into a simpler set of steps.

    The trick of all this will be for there to be an indexing system that can save these results, and at the same time use this information about how you work to hunt down other sites that may be of interest.

    Applying the criteria to our search example

    Let’s look at our XML Search routine. Again, what we’re looking for are search engines that provide an XML feed for the search, not for the individual search result. What we don’t want are search platforms that say they do this, but fail to provide a feed result in cases where there is no existing term or result.

    If we were to apply this tool to this search objective, we would want our bot to figure out what we are looking for, and then inject test phrases into the search engine to ensure that the site actually does what we want: Provide a feed for the search itself, not a feed for a single search-item on that list.

    Also, when we go about our tasks, keep in mind the multiple steps we are taking. Again, to do this search and monitoring task manually, we're jumping back and forth between the search engine, the target sites, our bookmarks, blog, and aggregator to save the feeds.

    What the tool is doing while we go about our search is analyzing what we save; then the bot goes into the sites we are using and monitors the integration, reporting, and outputs for each platform.

    In the case of our XML Search Button search, the tool would notice and monitor that we are copying certain links [but not others], saving that particular URI [but not other URIs] and then making predictions about what types of tools would expedite these tasks.

    At the same time, as the tool is saving data on what we like and are doing, it is also testing the returned results to identify strengths and weaknesses of each platform.

    Let's take a look at some of the types of things a bot could be doing for you:


    Bot Analysis


  • Survey: Identify novel features [external site]

  • Catalog: Show which platforms have which features [read = "logic mapping"]

  • Assessment: Compare advertised standards capabilities with actual results

  • Integration validation: Show which types of searches would work with which platform


    No doubt, you already have ideas about what could do this in 2005. But think back to the days when you didn't know. Wouldn't it have been nice to have a bot doing this hunting for you?


    STEP TWO


    Let’s look at the general features of this user-centered bot, and the types of tasks it performs to support the end-user.


    User-Bot Tasks and Features


  • Tests site features and options

  • Identifies capabilities

  • Identifies new features and tools

  • Recognizes user-scenario when tool may be useful

  • When user does these new tasks, bot estimates "if relying on external-tool-functions would expedite the user-task"


    This tool essentially generates a bot that will monitor internal activity across the platforms, and then organize the data into useful categories that can be compared with external findings on the web.

    The bot also can download new software; test it out using user-defined preferences; and record the results into categories useful for matching with user-task-performance.

    This is another way of saying that even though we don’t have a particular task that needs to be improved, the bot still hunts down and finds new software that may be of interest in the future. The bot’s role is to organize the results into logic-maps that can be compared to the steps a user takes to accomplish a task.

    In those cases where the user-steps show a definite pattern, the bot can then look at the task-maps assigned to new software and suggest a product, site, or external tool that would reduce the number of steps in the user’s task-map.
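The suggestion step just described could be sketched as follows, assuming task-maps are recorded as ordered step lists ending in an outcome. The tool names, step labels, and `suggest_tools` helper are hypothetical.

```python
def suggest_tools(user_map, catalog):
    """Suggest cataloged tools whose task-map reaches the same final
    outcome as the user's routine, in fewer steps."""
    return [
        name for name, tool_map in catalog.items()
        if tool_map[-1] == user_map[-1] and len(tool_map) < len(user_map)
    ]

# Hypothetical user routine and product catalog of logic-maps.
user_map = ["search", "open_site", "test_search", "copy_uri", "paste_to_aggregator"]
catalog = {
    "XML Search Selector": ["enter_string", "paste_to_aggregator"],
    "Plain crawler": ["search", "open_site", "list_links"],
}
# Only the tool that reaches the same outcome in fewer steps is suggested.
print(suggest_tools(user_map, catalog))
```

Real matching would compare whole logic-maps rather than just endpoints and lengths, but the sketch shows the shape of the comparison.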


    Bot Monitoring Tasks


  • Notice URL lists user is referencing

  • Monitor types of links saved/not-saved to bookmarks and aggregator

  • Identify the common characteristic of interest across many platforms

  • Identify the URL and steps taken to save the URL

  • Conduct a test search, monitor how the saved URI is used or affected by the end-user

  • Monitor the high frequency tasks

  • Evaluate the legacy of the search in terms of test outputs and the end-user's response to the presentation


    Suffice it to say that the bot could be doing a number of things to analyze the user's patterns across multi-platforms, and also registering this data into templates that could be compared to external capabilities.

    Going back to our search example, this bot could go onto any content platform that matches our recurring criteria. The bot would then do what we are repetitively doing: Finding sites that have a search tool which reports a single feed.

    After a while, the bot would be able to duplicate the steps we were doing, and we could then go about our primary task [something else].

    Again, in the search example, the tool would find sites that have XML buttons for feeds even if the site is not registered or pinged with FeedMesh and not integrated with protocols for a universal feed subscription.

    Rather, the bot would analyze our steps; capture the desired sites; know to duplicate our many uri-link-saving-steps; and automatically inject our target site into the aggregator.

    Think of it as a more advanced version of universal replace in a single-platform word processor. Rather than simply noticing a common misspelling on a single site and doing a universal change [saving the time to manually change each error individually], the bot does the same approach but focuses on tasks, not individual letters or characters.

    The bot saves us time by noticing our pattern and finding methods to more quickly achieve the result, without regard to the content, link, or particular product used.

    The bot acts like a Google text-predictor. It notices what we are doing; finds locations with similar content; displays those similar characteristics in groups; notices what the user does with the data and results: how it is saved, where the data is placed, what the subsequent steps are; and then monitors how you work with that data, and what you check and do next with it.

    The bot allows us to move from a text-based prediction in Google-searches to something that is task-based.

    Sample bot search

    Let’s take a look at this bot in action. There is a product out there that is named an XML Search Button Discovery tool. But we don't know that. Neither does the bot, yet.

    At this stage of our search, the bot is noticing that we are continually going back to a list on FaganFinder in search of XML Search Engines. The bot then starts to look for similar sites with the same features.

    The bot is also noticing that we are saving some of the site URLs in a bookmark and then blogging in draft, and then saving the feed URI into a particular aggregator.

    Then the bot notices that on each of the sites we have saved, we're also doing a test search. The bot also notices that not all the sites are then saved; rather, with some types of results, the user removes the item from the list.

    With time the bot starts to notice a pattern. We are looking for a particular kind of site, with specific features, and only saving and really using those sites that deliver a specific kind of feature.

    The bot looks across the web, finds a tool that improves our task-logic-map. The bot returns the following tool:


    XML Search Button Discovery

    Identifies search sites with single xml search feed


    The bot does two things when it finds the site. It integrates with the site's XML information to read the logic map; and the bot also does some testing.

    After a quick look, the bot concludes that the site has the following features:


    Discovery Steps


  • 1. Identifies a target site from anywhere on web. Can have either an active bot hunt down lists, identify an individual site, or read from content in XML feeds.

  • 2. Tool saves sites with button searches in del.icio.us, other bookmarks, or an aggregator.

  • 3. User enters search string into tool; tool then explores each site looking for those sites that return a single XML-feed-button for a single search.

  • 4. Tool saves XML feed URI to aggregator, explores results, zeros out all items as read.

  • 5. Tool monitors changes. Whether the target searches appear, how quickly, and time lags.
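Steps 1 through 3 of the discovery workflow might be sketched like this, with stubbed site descriptors standing in for live platforms. The site interface (`name`, `search`) and the engine names are hypothetical illustrations.

```python
def discover_single_feed_sites(sites, query):
    """Run the query against each candidate site and keep only those that
    return exactly one feed for the whole search (steps 1-3 above)."""
    kept = []
    for site in sites:
        feeds = site["search"](query)              # hypothetical site interface
        if len(feeds) == 1:                        # one feed for the search itself,
            kept.append((site["name"], feeds[0]))  # not one feed per result
    return kept

# Stubbed descriptors: engine-a returns one feed for the search;
# engine-b returns a feed per individual result, so it is rejected.
sites = [
    {"name": "engine-a",
     "search": lambda q: [f"http://engine-a/feed?q={q}"]},
    {"name": "engine-b",
     "search": lambda q: [f"http://engine-b/item/{i}" for i in range(5)]},
]
print(discover_single_feed_sites(sites, "xml"))
```

Step 4 (saving the surviving feed URI to the aggregator) would then operate only on the list this sketch returns.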


    The bot also runs through the site, creates a site map for itself, and verifies that the tool can indeed answer the following types of questions:



    Measures of Merit


    Tool answers these questions:

  • Can the XML button search showcase new additions to searches?

  • Will reported results showcase only additions since the last update?

  • Can the XML search platform differentiate between new content and previously added content?

  • Can the XML aggregator differentiate in the search-feed those results that already existed, and new content added after the start-search-time?


    The bot lets us know that the returned tool is interesting. This external tool is a system that saves baseline searches in one multi-channel feed, then compares this baseline search data against the returned results in another test search.

    The bot lets us know that the search tool correctly differentiates between [a] sites that report search outputs in a single feed and [b] sites that report search results but provide a feed for each individual search result.
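The baseline-versus-test comparison behind this check could be sketched as a simple set difference that preserves feed order. The item names are hypothetical placeholders.

```python
def new_items(baseline, current):
    """Return only items added since the baseline search was saved,
    preserving the feed's order."""
    seen = set(baseline)
    return [item for item in current if item not in seen]

# Hypothetical feed snapshots: the baseline search, then a later test search.
baseline = ["post-1", "post-2", "post-3"]
current = ["post-4", "post-1", "post-2", "post-3"]
# Only content added after the start-search-time is reported.
print(new_items(baseline, current))
```

This is the same differentiation the Measures of Merit ask for: previously existing results are suppressed, and only genuinely new content surfaces.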

    The bot also lets us know where to go to discuss this tool with others. The bot says "To discuss this more use the following tag ."

    Reviewing what the bot just did

    Let's go back to our original list of criteria for success. Step one of the analysis was to deploy the bot to monitor our actions, and then find a tool.

    This tool automatically does what we want: Find search engines that have a single XML feed. The tool has satisfied the following criteria:


    System Check: Confirm returned tool expedites user task performance


  • Reduced time, increased productivity: Eliminated duplicate steps of taking data from one search platform and retyping or copying it into another platform. [One entry, multiple uses]

  • Expedited data integration: Analyzed user practices and returned a tool that will accomplish this task

  • Cataloged and mapped the user steps [steps catalogues into a single routine]

  • Done an auto-search of available solutions: Started a search of the bot’s templates to match our task-map with those of product solutions already available



    STEP THREE


    The bot at this point has more data than before. The user has been issued a potential solution. The bot’s job is then to evaluate the use of this tool against the original task steps.

    In short the bot is comparing the efficiency of using this proposed tool against the continuing steps that have been simulated.

    In general the bot has already noticed which platforms and tasks are related. The bot stores the data, and uses the task-map to compare product-solutions.

    At this point the bot starts to track whether the success criteria have been met. The bot ranks the user performance with this tool. As the user engages in more repetitions, there is a higher probability of successfully listing and ranking the relevant criteria.

    Here, the bot notices what the user continues to do with the result after putting it into the aggregator [Ex: Zero-ing out all items as read]; and notices the other tasks associated with that event [Ex: publishing in blog].

    The bot figures out that the two events are related: The user is zero-ing out the aggregator, then publishing something. The bot does the same thing as above, and reports the following tool for the user to apply:


    XML Search Selector

    Hunts search platforms and integrates with aggregator


    The bot tells the user how to use this tool:

  • "Users enter string. Tool hunts down sites with both XML search capability and a single button for that target search.

  • "Tool chooses search site best suited for task, area, search, content, tags and other user-defined parameters.

  • "Tool takes user search string, applies it to site, grabs the search-feed for that particular search, and then imports it into the aggregator."

    Then the bot flashes a message, "To discuss this more use the following tag ."

    What the bot does after returning this product is ask:

  • Can the XML Button Search showcase only new additions?

  • Will using this tool advance the users productivity?

    The bot doesn't know, yet ... until it starts the next phase of its analysis.


    STEP FOUR


    This phase of the process is to monitor resource utilization. At this stage, the bot is narrowing in on the characteristics of our recurring searches. It is also associating success criteria with the desired features; comparing these criteria to the reviewed products; then assessing whether using this external tool would actually improve our performance.

    This phase is a logic-tree analysis. The bot dissects the results; establishes the criteria and decision tree to discriminate between used and non-used resources; establishes a decision-tree map to summarize the user's movements; and looks for similar templates.

    In short, what the bot is doing is a time-motion study, translating the decisions into a visual map that can be compared to other products' capabilities. The bot is archiving the user patterns and comparing them to the decision maps of other products to ensure that the user is moving along routes that are efficient and that arrive at the outcome.

    The bot is also noticing the users' favored- and least-favored results associated with both the logic map and the desired features.

    The bot is preparing to cache these results along various weights by categorizing the criteria. The bot is preparing to compare the user's task-map with the task-map of the available features.

    This is where we get into XML link wiring diagrams. This is a summary tool for links. As we manually go through various steps in our search, the bot will notice how we move along the site, what the roadblocks are, and the results of our selections.

    At the same time, it compares what we are doing, with what we appear to desire to do. The bot will create a logic-wiring diagram for our task, sort of like a wiring diagram for a decision tree or a silicon chip.

    The bot will integrate the searched-site-mazes into these maps to show the variety of steps users have to take to make a decision. The task map isn't what the site says. It's what the user attempts to do. It's a record of where the user is, how they got lost, and the paths they take.

    Again, there is no right answer. Rather, there are more efficient answers, which the bot can recognize in our work. The goal of the bot at this point is to create a summary diagram of what we are doing with our links and how we are integrating them, and then move the analysis from looking simply at links and content to conducting an analysis of our tasks.

    With time, the bot will learn which outcomes are desirable, and which types of maps are most conducive to a productivity improvement. The bot can identify the pattern and context of information, and then also identify gaps in the logic-tests used.

    The wiring diagram provides structure to vast data. The common task patterns are distinct by shape. This means that the relative distances, angles, and number of inflow and outflow steps of the wiring diagram, task map, or logic tree become features used to discriminate a particular task, and then compare that routine with other options and solutions available on external platforms outside the user's current tools.
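The idea of discriminating tasks by the diagram's shape could be sketched as a degree-profile signature over a task graph. The node names and the `shape_signature` helper are hypothetical; real maps would carry more features than inflow/outflow counts.

```python
from collections import Counter

def shape_signature(edges):
    """Summarize a task map by its sorted (in-degree, out-degree) profile --
    a crude 'shape' that can be compared across wiring diagrams."""
    out_deg, in_deg = Counter(), Counter()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
    nodes = set(out_deg) | set(in_deg)
    return sorted((in_deg[n], out_deg[n]) for n in nodes)

# Two hypothetical task maps: a linear routine, and one with a retry loop
# where the user backs up and re-does the search.
linear = [("start", "search"), ("search", "save"), ("save", "done")]
retry = [("start", "search"), ("search", "save"),
         ("save", "search"), ("save", "done")]

# Different signatures mean different task shapes, regardless of node labels.
print(shape_signature(linear) != shape_signature(retry))
```

Because the signature ignores labels, two users' routines with the same structure compare as equal, which is what lets the bot match a user's pattern against a product's logic-map.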

    Using our XML Button Search example [looking for sites that have XML feeds for a single search, but not feeds for individual search results], let’s take a look at the types of steps that could be put into a wiring diagram, and the actions the bot would notice are related to those steps.


    Logic Diagram Steps

  • Start task: Report data
  • Decision: Output linked
  • Retry task: User backs up, re-does search, creates link to output
  • Start over: User backtracks, attempts new step
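
    The steps above can be sketched as a simple classifier over an observed move log. This is a hedged illustration under assumed inputs: the (action, page) log format and the backtracking heuristic are inventions for the example, not a documented protocol.

```python
def classify_steps(moves):
    """Label each observed move with one of the step types above.
    moves: list of (action, page) tuples recorded by the bot."""
    steps, visited = [], []
    for action, page in moves:
        if action == "start":
            steps.append("Start task")      # user begins the task
        elif action == "back" and page in visited:
            steps.append("Retry task")      # backs up to a known page
        elif action == "back":
            steps.append("Start over")      # backtracks somewhere new
        else:
            steps.append("Decision")        # ordinary forward step
        visited.append(page)
    return steps

log = [("start", "report"), ("click", "output"),
       ("back", "report"), ("click", "link")]
print(classify_steps(log))
# → ['Start task', 'Decision', 'Retry task', 'Decision']
```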


    The bot tracks the paths in the site’s site map, and recognizes the pattern of blocks: the user continues to dance around a function, attempting multiple entry links, but to no avail.

    The bot also notices there are duplications in the searches, and multiple attempts to enter the same data.

    With time, the bot concludes that the data is entered correctly, but something is not working on the platform. The bot concludes the site appears to be correct, but is not functioning, and therefore does not currently meet requirements.

    The bot stores this decision tree in the archive; makes a note of where it was on the particular site; and makes a note to recheck the site to see if the site changes.

    If there is a change in the site, the bot reattempts the tasks and compares the user's original path to the changed path to see whether the outcome meets the apparent objective.

    If the bot concludes that the site has been updated and meets requirements, the bot then signals the user that the site meets the requirements; and the bot then applies the steps the user previously did to this site: Extracts data, conducts a test, saves the link, and then reports the feed URI to the aggregator.
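
    A minimal sketch of that recheck-and-replay cycle, with the fetch and replay operations stubbed out as callables. Everything here is an assumed shape for illustration, not a documented interface.

```python
def recheck_site(site, archive, fetch, replay):
    """Revisit a previously failing site; if it changed, replay the
    stored user steps and report whether it now meets requirements."""
    old = archive.get(site)
    current = fetch(site)
    if old is None or current == old["snapshot"]:
        return "no change"
    ok = replay(old["steps"], current)   # rerun the user's stored path
    archive[site] = {"snapshot": current, "steps": old["steps"]}
    return "meets requirements" if ok else "still failing"

# Stub example: the site changed from v1 to v2 and the replay succeeds.
archive = {"example.org": {"snapshot": "v1", "steps": ["search", "link"]}}
print(recheck_site("example.org", archive,
                   fetch=lambda s: "v2",
                   replay=lambda steps, page: "link" in steps))
# → meets requirements
```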

    The user has the choice to go into "do not bother me mode" and let the bot do this without any notification.

    This is a sample product that the tool could return as a valid test to incorporate and run. This type of test could either be discovered, or it could be incorporated with external updates. Here is what the bot finds:


    XML Aggregator Search Integration

    Tests search tool and aggregator compatibility


    One feature is the ability to assess how well the search tool can [a] provide a feed; then [b] inject that feed into the aggregator.

    This feature measures the integration between the aggregator and original search platform. The tool can differentiate between different search platforms; show how the reported results compare across aggregators; and show there is a trace between the original search input and the final search output at the aggregator.

    The bot tells us that there is a report which shows how the XML Search tool reports. The bot is able to create a logic-map of the following decisions in the newly-discovered-platform:


    Report Discussion


  • Does the feed report only changes?

    If not, then the search engine provides no superior service through an XML feed than one might get by using the tool directly or a simple search engine.

  • Can the read-all option be zeroed out?

    If not, then the output list is no different than a direct search on the site or by using a simple search engine.

  • After zeroing out the search results in the aggregator, does the feed-aggregator report only new additions?

    If the aggregator reports more than new additions, the search tool-aggregator are not providing a better service than one might get by searching the XML search platform directly.

  • Is the search result reported as a single search result?

    If not, then the aggregator and tool are not correctly differentiating between [a] search results in a single feed, from [b] results that get assigned an individual feed for each individual return.

  • Are there separate feeds for each search result?

    If yes, then this is not what is needed, and the tool is not working correctly.
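
    The checklist above amounts to a handful of boolean tests. Here is one way it might look, with an assumed feed-description record standing in for the real evaluation; the field names are invented for the sketch.

```python
def evaluate_feed_tool(feed):
    """Run the report checklist; return the names of failed checks."""
    checks = {
        "reports only changes": feed["only_changes"],
        "read-all can be zeroed": feed["can_zero"],
        "only new additions after zeroing": feed["only_new_after_zero"],
        "single feed per search": feed["single_feed"],
        "no per-result feeds": not feed["per_result_feeds"],
    }
    return [name for name, ok in checks.items() if not ok]

# A candidate tool that fails the last two checks.
candidate = {"only_changes": True, "can_zero": True,
             "only_new_after_zero": True, "single_feed": False,
             "per_result_feeds": True}
print(evaluate_feed_tool(candidate))
# → ['single feed per search', 'no per-result feeds']
```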


    The bot then sends a message to the user, "To discuss this more use the following tag ."

    The above tool feature is something that does what the user needs. The bot does a good job at analyzing.

    However, this tool appears to be an early-generation model. It does not necessarily accomplish the task, nor does it necessarily translate the analysis conclusions into a tool that accomplishes all the tasks.

    Yet, although the results may be specific to a particular aggregator, they are still valuable. If the user decides to transition from one aggregator to another, these results would be useful in deciding which aggregator was most consistent with how the user performs tasks.

    At this phase of the analysis, the bot stores the logic template and continues the monitoring, saving the results for possible future use.


    STEP FIVE


    The bot not only notices the immediate tasks users are performing, but also monitors the subsequent uses of that data and the user's travel to other sites. The tool will generalize the steps to find what the user is attempting to do.

    Here, the user is associating a particular feature on a site with a subsequent task of linking and posting. The bot has better information. It knows [a] the user is looking for XML-feature; and [b] hopes to integrate this feature with a particular follow-on function.

    As it narrows in on requirements and user experience, the bot returns a new product:


    XML Search Evaluation

    Tests external platform support


    The bot learns this new tool will test the XML search sites’ ability to review a third platform.

    The bot has mapped the site, and looks through the archive for a similar problem statement: Can the XML-button sites be used as a means to search del.icio.us tags, links, or bookmarks?

    The bot runs through the site and discovers: the [1] tool evaluates whether [2] this platform can be used as a search platform to look at a [3] third site and return it as a search result.

    The bot sends a message to the user, "To discuss this more use the following tag ."

    What just happened?

    The bot found the above tool. Now, the bot has a new set of commands and features from an external tool and is test checking the ability of the user-preferred-platform to work as desired.

    The bot evaluates the integration across platforms and notices what the user can or cannot search outside the given search engine. The bot then records the results and is ready to monitor a new user-task.


    STEP SIX


    This is the part of the analysis where the tool narrows in on the task map. This phase is called the gap analysis. The bot will compare the current state of information -vs.- what is desired or repeatedly requested.

    The bot discriminates the results: what distinguishes one item of interest from another.

    In this situation, the bot is noticing content is distinguished by publication status.

    Also, the bot notices that there are a high number of tasks and searches associated with non-pinged content. Despite the content not appearing on the FeedMesh, the user keeps asking for it. This tells the bot that there needs to be a method outside FeedMesh to find this data and content.

    The bot registers a gap as the difference between what the user desires -vs- what the test platform provides: the system provides content that is pinged, but the user wants content that is available in another way.
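
    In the simplest terms, that gap is a set difference between what is repeatedly requested and what the platform delivers. A toy sketch, with illustrative labels:

```python
def register_gap(desired, provided):
    """The gap: what the user keeps asking for but never receives."""
    return sorted(set(desired) - set(provided))

print(register_gap({"pinged content", "non-pinged content"},
                   {"pinged content"}))
# → ['non-pinged content']
```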

    The bot then starts to hunt for a product that will do this: find content that isn't pinged. After doing a trillion calculations in a second, the bot finds the following product:


    XML Search Platform Compare

    Compares feasibility of finding content


    One aspect of searching is to be able to find both pinged and non-pinged content.

    The bot discovers after reading the product XML specifications, and doing a simulation on the site, that the feature tests whether the XML button feed can find content that has not been pinged.

    The bot learns this feature looks for content that is not in the service list [not pinged], but does appear on the web. The specifications say that the tool will hunt down content that can be linked to an XML feed, but is not searchable through one of the services.

    Also, the bot learns that the tool can identify whether or not the external search platform can still find content, even though it may not have been pinged, and does not report as a search result in the FeedMesh.

    The bot thinks this is a good prospect, installs the system, and you're ready to go. The bot then says, "To discuss this more use the following tag ."

    What just happened?

    The bot found a tool. It was that fast. The tool monitors how the individual sites are performing. It then narrows in on what new features exist.

    This is more of the process of narrowing in on target requirements relative to user actual tasks.


    STEP SEVEN


    Now the tool focuses on successful results. The bot is noticing what the user is doing and identifies tools with successful results. The bot associates these preliminary steps with the next steps.

    The bot continues to gather data that will help discriminate the feature on the platform. At the same time, the bot is also collecting user-logic-flow on the platform to create two maps: One map for the site-logic-flows; and a second map of what the user is doing on that map.

    The bot will compare the site map with the user map and see which types of layouts would be preferable. These templates are registered in the central registry, and the bot then puts greater weight on these types of layouts and formats.

    The bot also then monitors the task completion. It monitors the time and motion, stores the data, and compares it with other scenarios. The bot doesn't just look at what should be happening. It verifies the user is getting the job done.

    Now, the bot compares the time improvements vs. the learning curve. The bot asks whether the user needs a change in steps, map, site, or task-order to improve the learning curve to meet production goals.

    At this juncture, the bot is bundling tasks into bunches. The bot is starting to assign performance data to the logic map.

    Also, the tool is segregating common tasks into groups that other tools and platforms already do. The bot is hoping to compare the user's logic-maps with those of external platforms to find the best match.

    Cross-platform communication

    Again, the bot relies on results from other bots to make this analysis and comparison. The bot ensures that the tasks are properly mapped; and in cases where the task flow diverges into multiple lines [there are two or more ways to accomplish the same task], this divergence is noted as an alternate or back-up.

    The bot is narrowing in on the criteria that the tool hunter can use to successfully assign a tool to satisfy these requirements. The bot finds the following tool:


    XML Content Discovery

    Evaluates content visibility


    The bot discovers that this tool is different than PubSub in that it looks at the platform search capability, not the content in the reported feeds.


    Desired results


    A. Tool correctly reports platforms which [1] perform multi-platform searches, and [2] produce results in a single feed.

    B. [?] Creates a unique search string for XML search buttons to find.

    C. Comparison between platforms


    Notice one of the task-statements [B] is not clear [?]. The bot knows that there is a potential problem with the platform. Thus, the tool may not work well. Perhaps there are beta errors that need to be fixed.

    The bot monitors the developer's progress in fixing the site and lets you know when the tool is ready for your own use.

    Also, the bot identifies which search platforms find the target string: it identifies the time lag, which platform found the target string first, and which platforms miss the target string.

    The bot is looking at the newly discovered tool and notices how it works:


    Platform Steps


    1. Target string input to one platform

    2. Platform conducts search of external platforms.

    3. Test platform reports results

    4. Identifies time, gaps, whether search platform misses hits

    5. Identifies how to correct the problem in code or template

    6. Identifies options to get content and feed into the search platform.
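
    Steps 1 through 4 can be sketched as a probe that plants a unique target string and times which platforms find it. The search callable and the result record are assumptions made for this example, not a real API:

```python
import time

def probe_platforms(target, platforms, search):
    """Record which platforms find the target string, the lag for
    each, which found it first, and which missed it entirely.
    search(platform, target) -> bool is supplied by the caller."""
    found, start = {}, time.monotonic()
    for name in platforms:
        if search(name, target):
            found[name] = time.monotonic() - start
    return {"first": min(found, key=found.get) if found else None,
            "lags": found,
            "missed": [p for p in platforms if p not in found]}

# Stub: platforms A and B index the string, C never does.
result = probe_platforms("zx9-unique-string", ["A", "B", "C"],
                         search=lambda p, t: p != "C")
print(result["first"], result["missed"])
# → A ['C']
```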


    The bot thinks this might be interesting and sends a message to the user, "To discuss this more use the following tag ."

    What just happened?

    The bot has found a tool that will evaluate the content visibility on a given page. The bot is comparing the user-steps to those in the external option.

    By looking at the content discovery tool, the bot maps this tool into the six steps. It then compares these features of the external tool to what the user is actually doing.

    One approach is to assign the tool and provide instructions to use that tool. The other option is to continue the search for a tool that better fits the user's preferences. The user can define the successful result in terms of task complexity or time to learn the task.

    If the task is of short duration, it may not be prudent to use a new tool that requires learning above and beyond the original task. The bot knows it is not wise to over-engineer a simple task.

    However, if the task is relatively long compared to the time required to learn the task, then the user could choose to incorporate the product into the user's platform, and schedule time to learn the new procedure.
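
    That tradeoff reduces to a simple inequality: adopt the new tool only when the expected savings outweigh the learning cost. A sketch with invented parameters:

```python
def should_adopt(runs_remaining, learning_minutes, minutes_saved_per_run):
    """Adopt only if the time saved over the remaining runs of the
    task exceeds the one-time cost of learning the new tool."""
    return runs_remaining * minutes_saved_per_run > learning_minutes

print(should_adopt(1, 60, 3))    # one-off task → False
print(should_adopt(50, 60, 10))  # long repeated task → True
```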

    The bot decides that this tool is favorable. There is clearly a cost-benefit tradeoff here, one the user can assign and schedule automatically, or choose to let the bot take care of.

    Of course, the bot sends an e-mail to the schedule tool, and this is done through an XML feed, thanks to FutureMail.


    STEP EIGHT


    This phase of the process is one where the bot monitors the tools used and choices adopted to see if they are working.

    The bot will integrate and work with unfolding technology and concepts, noticing new tools, steps, codes, and procedures used and available. Notice the bot is not as concerned with words or tag-names.

    The bot will set up methods to check what the user is doing with that new feature; monitor those coding functions; look for similar features; and then make a map of the features available to perform that step.

    The bot will independently check what the user is hoping to accomplish, and conduct a transparent test of the external platform-option to check that the desired results match what actually happens.

    To do these types of procedures quickly, the bot decides it needs an upgrade for its own use. The bot returns the following tool:


    XML Tag Checker

    Validates tag tools


    The bot learns that this tool checks the ability of the platform to identify and report tags. Also, the tool evaluates whether the tag-format instructions are consistent with the platform's ability to find those same tag formats.

    The bot thinks this is a good tool to monitor a tag and compares [a] the platform-results to [b] non-platform results.

    The bot then gives you a list of features and capabilities. When the tool matches your logic map, the bot can inject this tool into your platform. In this case, there may be something for you to read. The bot gives you a quick report:


    Requirements


    Your bot has found a tool which identifies tags that are:

  • Saved and used;
  • Existing but not appearing in the platform search results; or
  • Used and searchable with other platforms, but not this particular platform.
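
    Those three buckets fall out of plain set arithmetic over the observed search results. A minimal sketch with invented tag names; nothing here is a real tool's interface:

```python
def classify_tags(saved, found_here, found_elsewhere):
    """Sort the user's saved tags into the three buckets above.
    Inputs are sets of tag strings observed in search results."""
    return {
        "saved and used": saved & found_here,
        "exist but not searchable anywhere": saved - found_here - found_elsewhere,
        "searchable elsewhere only": (saved & found_elsewhere) - found_here,
    }

buckets = classify_tags({"xml", "rss", "opml"},
                        found_here={"xml"},
                        found_elsewhere={"xml", "rss"})
print(buckets["searchable elsewhere only"])
# → {'rss'}
```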


    The bot discovers this tool identifies what is searchable, which types of tags work and are searchable, and how effective the XML search platforms are in finding simple or complex tags. After doing some tests, the bot learns this tool can do a good job of checking whether multi-word tags can be generated, used, reported, and cross-linked in a secondary medium.

    The bot is happy and sends a message, "To discuss this more use the following tag ."

    What the bot just found

    The bot gave you a report that you might want to take a close look at. There is something the bot wants your input on.

    After you approve, the tool integrates the new feature. Its job is to notice new patterns and identify shortfalls. The bot will notice the logic maps on a particular site, and compare them with the user-routes taken to identify roadblocks.

    The bot will be able to identify results that are not consistent with the desired outputs, or that drive outputs which, if adopted, would generate less than ideal results.

    This is another way of saying that if the bot were to arrive at a site like Technorati and conduct a test of the tags, the bot would know that single quotes in a search string would generate one type of search return in the aggregator; the bot would then monitor how those results were used in the aggregator to know whether the search-command-turned-feed-results should be modified to return more specific results.

    Whew! And all that in just a micro-second.

    The bot will return results and decisions to the user in the form of advisories on what to change or adjust in the attempted task.

    The bot may notice that the way Technorati is set up may not be consistent with how the user generally does things; the bot will make a suggestion on which specific link to click to accomplish the goal.

    In the case of Technorati, the bot would inform the user that it is not wise to attempt to create a watch list from the search platform, as the returned result will generate backslashes; also, the bot would notice that the user continues to enter single quotes, despite the objective of finding all the terms.

    The bot knows this because the user continues to enter the site, create a watch list, and follow the directions, but then deletes the results from the aggregator. The bot would signal the user how to modify both [a] the search commands and [b] the route taken [to find the CREATE WATCH list and ensure that the ADD button] did not add a feed that had backslashes in the place of single quotes.

    The bot knows to do this because it has [1] a task map for the user-choices; [2] a site map for the target site that it creates, and compares to the site-provided XML template; and [3] data on how the feed is ultimately used or disposed.

    In this case, the bot tells the user that the site would be better navigated [to achieve the goal of monitoring a tag] if certain routes were taken, and certain search strings were entered.

    This is only an example with a familiar website. Imagine the possibility of being able to apply this tool to something that is new and has not been beta tested: the users could quickly figure out whether the tool is working, and whether the results are superior to outputs from competing tools.

    The bot can also consolidate the lessons into suggestions for changes to W3C standards, and into tools to do the task despite the shortfalls and limitations of the existing platforms.


    STEP NINE


    We’ve already touched on communication with the user. This phase of the tool is when the bot is in a position to assign questions and answers to user criteria.

    The bot explores whether the technology is working as intended, and whether the user achieves the result.

    The goal is to look for ways to make the technology more easily used and adopted. The bot is tracking the maps and sharing them with other users to see how other users are adopting the procedure, and what might be gleaned from other bots.

    The communication is both internal between the bot and the platforms, and with the user; and externally with other bots and platforms outside the user’s control.

    The bot prepares a report that narrows in on useful feedback on the success of various user-maps and site maps which the aggregators could incorporate and adopt in cycle updates.

    The bot returns the following tool:


    XML Tag Monitor Report

    Tests assumptions on tag addition options


    The bot reviews the product, compares it to the user-task-map and starts mulling things over. The bot finds a tool that helps it analyze this situation:


    Requirements


    The bot found a tool that answers the following questions:

  • Are multi-word tags working and searchable?

  • Are tags not being used as effectively as they could be?

  • What strategies could be used to get public adoption of a particular tag?

  • Are there advantages to [a] using a tag connected to a separate platform over [b] using a single blog with Q/A/discussion?


    The bot says, "To discuss this more use the following tag ."

    What the bot just did

    As you can see the returned tool evaluates the new technology and asks a number of questions. This information is useful for integration with development efforts.

    The integrated system of bots can provide feedback data to developers on similar tasks. The bots can look at the ongoing development efforts and can provide information to developers on how users are actually accomplishing their tasks.

    The tool will look at planned efforts and estimate whether the planned tool will adequately address the actual logic map users are following.

    The bot also looks at whether the planned development efforts are addressing the weaknesses in the reported site-maps: Ease of follow, roadblocks, unclear procedures, confusing tasks, results that are discarded far earlier than expected.

    In a couple of minutes, the bot notices a new pattern and does some new assessments about your workflow and another tool in the template-registry.


    Bot Assessment


    The bot at this juncture conducts a follow-up step by

  • applying the lessons;
  • keeping an eye out for potential new tools that are archived, tested, and logic-maps are stored for comparison, analysis, and use; and
  • providing feedback on tools so they are developed with actual practices in mind.


    Based on this analysis, the bot then returns the following tool:


    XML FeedMesh Tag

    Analyzing content-tagging publication and adoption


    This feature evaluates whether the content and tag are reporting correctly in FeedMesh. The bot discovers the tool can do the following types of analyses:


    Evaluation questions


  • Did the blog spot not get pinged?

  • Were tags added after blog publication and pinging?

  • Did other content get added on top, so that the target-tag was never discovered?

  • Did the tags get used, adopted, converted, and reported into the index?


    The bot says, "To discuss this more use the following tag ."

    What happened?

    The bot found a tool which monitors post-publication results: whether the expected tags were reported and linked as planned, and whether the valid tags showed up with back-ups.

    The bot discovered the tool provides the following types of desirable features: expediting data integration; ensuring cross-platform integration; and focusing on using one step to enter a tag in location A [search, watchlist creation] that is also automatically loaded in a second area [blog, bookmark], then subsequently used in a different location.

    Ideally, if a tag in location A were changed, the user would need to know that their watch list may need to be modified, expanded, deleted, or adjusted.

    After doing some test runs on the tool, the bot learns a couple of things. This tool would highlight the three places where this tag was changed, then change the tag so that the bookmark, blog, and watchlist monitoring tags were updated in the aggregator. Essentially, it takes a minor typing change and ensures all the affected platforms where the user has used this data are seamlessly updated and remain fresh.
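
    That single-change propagation might look like the following sketch. The platform records are stand-ins invented for the example, not a real API:

```python
def sync_tag(old, new, platforms):
    """Rename a tag once and push the change to every platform
    [bookmark, blog, watchlist] that uses it; return the list of
    platforms that were touched."""
    touched = []
    for name, tags in platforms.items():
        if old in tags:
            tags[tags.index(old)] = new   # in-place rename
            touched.append(name)
    return touched

platforms = {"bookmark": ["xml", "bots"],
             "blog": ["xml"],
             "watchlist": ["bots"]}
print(sync_tag("xml", "xml-tools", platforms))
# → ['bookmark', 'blog']
print(platforms["bookmark"])
# → ['xml-tools', 'bots']
```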


    STEP TEN


    This is the phase where the product features get showcased. The goal of this phase is to take the consolidated list of user-maps, and then compare these with the results from the bot-analysis.

    Users would be able to see how repetitive tasks are addressed with a number of options and features in the product. Either the users can choose from a list, or the system will automatically integrate the external tool into the platform and provide instructions to the users on what to do more efficiently.

    The tool assigns solutions and ensures the logic-map is streamlined.

    Here is a sample product that the users’ bot might come across. Notice as you read it that the product features get compared with what the user is actually doing. Also, notice that the way the user is generating a feed, gets compared with how the feed is actually published and displayed.

    The bot runs across this product, and does an analysis:


    XML Feed Evaluation

    Reviews search tool exploration


    The XML feed on the site says this is a "feed content improvement tool."

    The bot learns that the promoters of the site are saying, "This tool helps assess whether publishers benefit or lose something by using summaries in their feed -vs- using full blog-content in their feed. Tool shows users how search results change with changes in feed summary length."

    The bot has its doubts. The users are saying that the product may not be able to tell the users which search platforms search just the summary, or the entire feed.

    However, the bot learns that the publishers can learn whether key terms and phrases outside the feed summary will or will not be included in the XML search tool.

    The bot is not sure whether this really helps publishers decide whether or not to publish the complete feeds or summary feeds.

    The bot learns that the site does a good job at making recommendations on where to place content to get it searched; the problem is that the recommendations have a poor track record of generating sustainable return on sales.

    Ooops. However, the bot learns that the site does display the entire feed, and is clear in showing the limits of the search. This is what the bot learns about the tool:


    Evaluation Steps


    1. Users enter target audience

    2. Tool compares feed to how search tools explore feeds

    3. Makes suggestions on content priorities


    The bot learns the goal of the tool is to make the most-likely-searched content more valuable. It recommends changes to ensure the feed audience is serviced, with appropriate tone.

    Makes suggestions where to place key names and content. Makes suggestions on what to add to the end of the summary to increase odds of action, interest, and attention.

    The bot has its doubts. It has seen a consistent difference between these types of claims and what the products actually do. The bot does additional testing.

    The bot says, "To discuss this more use the following tag ."

    What happened

    The bot has learned that the above tool might be able to evaluate whether the product could better support the user: advising the publisher how to improve their feed content.

    For the most part, the bot-discovered tool optimizes the user's choices for integrating the product with others.

    Theoretically, this is a multi-platform analysis from the blog to the FeedMesh burner. This tool looks at how to evaluate constraints, or promote the result within the existing constraints of the platform, firewalls, and local protocols.

    This tool would only have been found had there been a mechanism that could identify constraints in the system; compare them with the user objectives; evaluate the time on feed; then compare the goals of the user to the actual uses.

    The tool would map the site-usage by external readers, and provide feedback to the publisher on what was going on; at the same time, the bot would find tools that would assist the user in achieving their objectives: Improving the feed template to improve subscriber satisfaction.

    However, despite all that the bot found, the bot is not confident the tool can actually do what is advertised.


    STEP ELEVEN


    The user-defined-bot also works with industry leaders and well established products. In this case, the tool integrates with Google.

    Here, the bot looks at what the user is doing with the search results, and finds new content and explores reasons content is not getting attention or visibility.

    The bot compares its own automated efforts with [a] the user and [b] the platform, and understands why there is a difference between them.

    The bot then goes to an external platform to get the desired tool to accomplish the task. In this case, the bot has found the following tool:


    XML Novel Google Search

    Reports newly added search results


    These are sample requirements for a specific search tool. Ideally, we should see only the new results since reviewed-all was last clicked.


    Requirements


    The tool would differentiate between three things:

  • Content: Blogs and feeds updated
  • Feed: New feeds added since last review
  • Links: New links added to the search results since last review
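
    The three-way split can be sketched by diffing the current results against the last reviewed-all snapshot. The snapshot shape, {feed_url: set of item links}, is an assumption made for this example:

```python
def diff_since_review(previous, current):
    """Split fresh search results against the last reviewed snapshot
    into the three categories above."""
    new_feeds = set(current) - set(previous)
    updated = {f for f in current
               if f in previous and current[f] != previous[f]}
    new_links = {link for f in current if f in previous
                 for link in current[f] - previous[f]}
    return {"updated content": updated,
            "new feeds": new_feeds,
            "new links": new_links}

prev = {"feed-a": {"p1", "p2"}, "feed-b": {"p3"}}
cur = {"feed-a": {"p1", "p2", "p4"}, "feed-b": {"p3"}, "feed-c": {"p5"}}
print(diff_since_review(prev, cur))
# → {'updated content': {'feed-a'}, 'new feeds': {'feed-c'}, 'new links': {'p4'}}
```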


    Seamlessly, the bot adds the following tag to your list [bookmarks, tags, watchlists, aggregator, and blog reference tags], and says, "To discuss this more use the following tag ."

    Now that you have the tool, when you start blogging about a particular issue in the future, the likely-tags you might want to use are flashed in a pop-up window. Of course, you can turn this off.

    Notice that the tool is looking at what is new in the search result. The tool then monitors how this works with the other products: can the user-map be improved with more effective steps?

    Again, the tool looks at what the user is doing and hopes to accomplish, and figures out which tools to integrate with this platform.

    The bot learns that the tool finds and reports platforms that accomplish this task. In the case of the Google Search, the bot takes the following steps:


    Bot Actions to Match Tool with User Requirements


  • Knows the user wants X-type of platform [A site that has a single feed for the search results, not for each individual reported result]; then

  • Compares the user objective to the existing platforms; then

  • Looks for tools that are published and viable that accomplish this objective; then

  • Integrates this tool into the user’s suite of tools.




    STEP TWELVE


    The bot is now narrowing into the desired next steps. On its own, the bot looks for ways to accomplish multiple steps with one task.

    The bot is noticing the pattern of steps, and validating that a given tool meets the success criteria: that the data were entered once; that the user only handles the data once; and that the bot is doing the data transfer from one platform to another.

    This is another way of saying that the user creates data only once, and the multi-platforms, with the bot’s assistance, integrate with a tool to perform the same tasks and share data without user involvement.

    The bot does some other analysis and comes across the following tool:


    XML Multi-platform Search

    One-step search button


    The bot learns this tool is the single entry for multi-search button feed results. This single box is the single platform for all XML Search Buttons.

    Users would input a single search string and get a search of all the sites with XML buttons.

    It returns the result like OPML: all results across all search platforms are in a single feed.
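
    A sketch of that single entry box: one query fanned out to every XML-button platform, with everything flattened into one feed. The platform callables here are stubs invented for the example:

```python
def merged_search(query, platforms):
    """One search string in; one combined feed out, with each item
    labeled by the platform that produced it."""
    feed = []
    for name, search in platforms.items():
        for item in search(query):
            feed.append({"platform": name, "item": item})
    return feed

platforms = {"alpha": lambda q: [q + "-1", q + "-2"],
             "beta":  lambda q: [q + "-x"]}
for entry in merged_search("xml", platforms):
    print(entry)
```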

    The bot says, "To discuss this more use the following tag ."


    STEP THIRTEEN


    Let’s apply this tool to blogging. The tool would monitor how the users use an application, notice the steps, and notice common features that could be associated.

    In this case, the bot would find or develop protocols to integrate user task with platform capabilities. It both integrates the functions, and aligns the platforms with the user’s desired logic maps.

    As an example, the tool should know the challenge is to block spam, while also meeting the competing goal of keeping access to feeds and public comments. The bot would create the protocols to do what the user requires.

    Here’s a sample returned result of the tool that solves the user’s problem:


    XML Comment Feed

    Code for easy addition of blog comment feeds



    Requirements

    The tool creates an auto-link function triggered by the publishing ping and auto-generates three things:

  • A comment-feed for the aggregator to read;

  • A fast link between the blog, comments, and aggregator report; and

  • [?] A link-set-up of blog-comments in feed format.


  • Search platform can specifically hunt down sites that have comment feeds, or create a comment feed from a single comment in a URL.

    The system will link the comment-feed URI to a stand-alone blog that has no comments yet, so that future comments flow into the feed.
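    One hedged way to picture the auto-generated comment feed: emit a minimal RSS 2.0 channel for a post even before any comments exist, so an aggregator can subscribe immediately. The URL scheme and item layout are made up for illustration:

```python
# Hypothetical sketch: auto-generate a minimal RSS comment feed for a
# post that has no comments yet, so an aggregator can subscribe before
# any comments arrive. The URL scheme is invented.

def comment_feed(post_url, comments):
    items = "\n".join(
        f"    <item><title>{c['author']}</title>"
        f"<description>{c['text']}</description></item>"
        for c in comments
    )
    return (
        '<?xml version="1.0"?>\n<rss version="2.0">\n  <channel>\n'
        f"    <title>Comments</title>\n    <link>{post_url}#comments</link>\n"
        f"{items}\n  </channel>\n</rss>"
    )

feed = comment_feed("http://example.com/post", [])
print("<rss" in feed)  # True: an empty comment feed is still subscribable
```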

    To discuss this more, use the following tag .

    What the bot is doing

    Notice there may be an integration issue with the last function, marked by the [?] symbol. The bot is monitoring this for you before injecting the tool.

    This tool demonstrates that a user can add comments to any content and platform, but they do not need to be in a blog. The auto link function satisfies the requirement of seamless integration and ensures the platforms are integrated.

    This is known because the bot recognizes the competing criteria: To get feedback, all the while blocking spam. The bot has hunted down a tool that will satisfy this requirement.

    The tool is essentially a reward or catalyst for other platforms. Those sites that integrate with the tool get adopted; those sites that do not integrate will have to find another means to get their product showcased.

    This tool monitors underlying trends and creates tools and protocols to meet those objectives. It takes the user preferences and finds tools and protocols that will achieve the result.

    The tool also issues instructions to integrate and adopt tools to complete the task. It advises the user how to improve: Either through training on new tools, or advising on more efficient ways to achieve the same outcome. The bot returns the following tool:


    STEP FOURTEEN


    Here is where the bot summarizes and collects protocols and tools actually adopted. In this example, you can see that the returned tool has five features:


    Features of Bot-Identified Tool


  • One-step
  • Multi-platform
  • Seamless
  • No download required, fits in with user constraints
  • Stores results and organizes information for easy retrieval


    The bot returns this tool because it is a web-based system where users can click on a link or button and, with one step, add the tag to the aggregator. This is a one-step search-in-platform, add-to-aggregator option that previously required many steps and multiple rounds of copying data.

    This is a good thing. The goal is not simply to add a feed, but to integrate the search platform with the later aggregator monitoring. The search-commands are updated and refined based on how the user subsequently uses the results in the feed.

    The nice thing about the bot: This is done without adding a link to other protocols, and it is a web-based platform. Nice choice, since your bot knows you are behind a firewall, and the IT department will not let you download anything work-related. [Your bot is advising you to look for a new job, but you ignore it.]


    STEP FIFTEEN


    Here is where the logic maps are stored, archived, and maintained. A sample logic map includes major tasks; shows and documents that the tool actually expedites the task; and verifies the tool meets the success criteria: That it is one step, and that we are focusing on tasks, not links, and results, not processes.

    In this case we have four steps:


    User Tag-Blog-Aggregator Tasks


  • Blog
  • Tag
  • Track
  • Converse


    The bot returns this tool because it will help you. Blogging and tagging are combined in the single tool via the auto-tag-to-aggregator command.

    The bot found a tool that allows the user, in one step, to ensure there is consistency between the blogged tag, the tag-monitoring tools, and the feed stored in the aggregator. If the tags change or are updated, they are also changed across all the platforms.

    Again, the bot analyzed the above four steps and found the tool that would integrate these features.
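    A minimal sketch of that tag consistency across platforms, using invented sample stores in place of real blog, bookmark, and aggregator APIs:

```python
# Illustrative sketch: keep a tag consistent across several stores
# (blog, bookmarks, aggregator). Store contents are invented samples.

stores = {
    "blog": ["xml", "search", "feeds"],
    "bookmarks": ["xml", "opml"],
    "aggregator": ["xml", "search"],
}

def rename_tag(stores, old, new):
    """Apply one tag rename everywhere, so platforms stay in sync."""
    for name, tags in stores.items():
        stores[name] = [new if t == old else t for t in tags]
    return stores

rename_tag(stores, "xml", "xml-feeds")
print(stores["bookmarks"])  # ['xml-feeds', 'opml']
```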

    Then the bot shares the lessons with the other aggregator-integration tools and bots, sharing:


    Multi-Platform Data Sharing


  • user-templates;
  • site-maps; and
  • results from other discovered, but not yet used tools.



    STEP SIXTEEN


    This step goes into more detail on the FeedMesh Bot. This is a tool that creates a map of the external platform, identifies limitations, looks for other tools that will fill in those gaps, and develops systems to work with the constraints and promote tools that automate what the users are not doing: Going to auto-discovery of tools.


    XML FeedMesh Bot

    Actively seek platforms with xml search capability


    Tool integrates with FeedMesh. Used to identify non-pinged and non-submitted sites, feeds, platforms, and tools to FeedMesh.

    Transitions from user-submitted system, to a system that actively hunts down feeds.

    Has ability to change any site to an XML feed. Publishers have the option to block the FeedMesh bot.
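    The site-to-feed conversion with a publisher opt-out might be sketched like this; the `block_feedmesh` flag and page structure are assumptions standing in for a real opt-out mechanism such as robots.txt:

```python
# Sketch of the FeedMesh-bot idea: convert a plain page into feed items,
# but honor a publisher opt-out first. The opt-out flag and page data
# are hypothetical; a real bot would check robots.txt or a meta tag.

def to_feed(page):
    if page.get("block_feedmesh"):       # publisher opted out
        return None
    # Treat each headline on the page as one feed item.
    return [{"title": h, "link": page["url"]} for h in page["headlines"]]

page = {"url": "http://example.com", "headlines": ["A", "B"],
        "block_feedmesh": False}
print(to_feed(page))  # two items, one per headline
```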

    The bot says, "To discuss this more, use the following tag ."

    This returned tool provides clear instructions: How to create, integrate, and transform the feeds into something that is usable.

    It moves from something that is user-initiated, to bot-found; moving away from user-involvement, to something that is seamlessly discovered by automated tools. Moving from passively receiving the feed submissions, to actively hunting down new sites, and content.

    You like the bot. It is saving you time. You think that the bot may have a good idea about your IT department.


    STEP SEVENTEEN


    This is the analysis phase. The bot actually goes back and makes sure your work situation is improving.

    These are the types of questions that the user-bot would then ask at this stage:


    Bot Reviews Results


  • Have we achieved the goal despite the limitation?

  • What has to be done to do this?

  • Who or what is doing this?

  • How well can we improve efficiency of the work in this task?


    The challenge is how to ping search sites where something has been submitted, but the site is not connected to either PingOMatic or FeedMesh.

    The tool asks, "Is there code to add to the template? If not, then this is favorable."

    The bot reviews the available content. The service reports whether it is ready, and whether the search is incomplete or has returned no result.


    STEP EIGHTEEN


    This is the sharing feature.

    This is where the user-bot reports its findings to other bots: That it has found a new user-end tool to do better searches for XML feeds.

    The bot sends a clear signal: This tool is simple, and makes searching easier.

    Also, the user-bot does the template sharing in this phase. It connects with other users and platforms to share features [not links]; templates [not content]; and wiring diagrams [not blogs].

    This is a more sophisticated form of user-end sharing. It upgrades the current system of link sharing into the sharing of new ideas.

    It transforms blogging from a reporting system [that could be automated and expedited] into something that shares new ideas and novel content.

    Truly, blogs can do more than what the users are stuck with. When will they be given the tools to expose them to the solutions needed to increase efficiency?


    CONCLUSION


    What we've done is run through some example products that a user-tool could find. This user tool examines how the user is using the platform and looks for ways to increase productivity.


    The key is to make these user-targeted suggestions relate to the way the user works, as opposed to product-push. Those firms that can ensure their products integrate with this protocol will get showcased.

    Your life just got easier. If only you could convince the IT department . . .

    Links: Updated 3 Mar 2005

    Ref: Welcome to YSDN, Yahoo Search Developer's Network

    The way that users work is across many platforms, not necessarily in a predictable order. They jump around among concepts, not specific words. The last thing we need is more suggestions on a site to read. What users really want is a tool that will analyze the way they do things, and make suggestions on how to accomplish those multi-platform tasks more effectively.

    I got to thinking about this and came up with an idea. What I'd like to do is let you know what it is and then demonstrate the types of products you might be able to get if this tool actually existed.

    What I'm going to do is outline the general concept and then go into some examples that show the concept is viable.


    User Bot Concept


    A user-support-bot which finds user-tools on the web to increase user productivity


    Unlike Microsoft Office Assistant, which stays on a single platform, I propose industry begin looking at an XML Assistant that monitors the user's actions across multiple platforms, then makes suggestions on how to improve their productivity, like:

  • Display new products and features

  • Visually show how task-steps could be streamlined

    The big challenge for the end-user is to find these products that might help them out. Yet, for the most part, end-user tools are designed for the generic user. There isn't something out there that creates custom software tools for my individual needs.

    The other challenge is that if I run into a problem, I may be so deep into a disaster that no amount of software solutions is going to get me out of this hole before the competition.

    Rather, I'd like a tool that is going to monitor how I'm doing things, then suggest to me improved ways to do these functions across multiple platforms, so I can save time and be ready for the real disasters.

    Let's take a simple example in the XML world. I come across a link in a feed that I'd like to save and come back to. The solution I'm offered is a clip file. That's nice.

    And the next step I generally take is that I copy some key terms out of this clipped file, and create a watch list. What do I do with that watch list? I tag it; load it up to my aggregator, and then blog a short idea.

    See what just happened? I continue to make a multi-platform action after I do an initial step:


    Sample List of Repetitive Tasks


  • Watchlist created

  • Tagged item saved to bookmarks

  • Watchlist URI loaded to aggregator

  • Tag loaded to blog


    The above four steps are just one example of things I do across many platforms. I'm not the only one doing these types of things, and I'm not the only one running this simple set of commands.

    In the real world, you also have your own ways of doing things. They work for you.

    Let's take a break

    Take a moment right now and think about the many times that you find yourself doing the same types of tasks over and over again. It may be copying links, transferring data from one platform to another, and then doing something else with some summary data.

    We all have our patterns. The key is to recognize these patterns and find a tool that helps us. Which leads us back to the original point.

    We're busy enough just trying to keep up with information. Who has time to sit back, think about a multiple number of tasks that they do over and over, then while work is piling up, still have the luxury of finding a tool that solves that particular problem?

    Besides, the budget for consulting is only so big. It doesn't do much good to find, three years from now, a tool that would've saved me a lot of time, without the need to spend three years looking for ... something that I'm not sure what it looks like.

    Sure, we've got our online friends. And we hope the blogs and feeds give us some ideas. But those are outside our control.

    What would be nice if we had a custom tool that looked at what we, the individual end-user, actually did; and then this handy-tool hunted down solutions for us that we could incorporate. Right there. In the office, at home, or on the road. Without any wait.

    Just like a search engine can analyze our individual searches then return ads related to those searches and links, so too can a tool be created that will analyze our tasks and return products to us that will expedite our workflow.

    In the ideal world, a tool could be created that would ensure several tasks we perform could be integrated into a single one-step process. I don't know how many times I've come across del.icio.us-support tools that someone has created.

    I keep thinking, "How can I find out about these and actually use them when I need them?"

    Someone once said that every problem has already been solved. And for every solution there's half-a-dozen solutions out there. The trick is to connect the problem with the solution. But who has time to both work and analyze their work?

    Then again, maybe someone is calling this tool a different name. Oh, wouldn't a search-engine-tag thesaurus be nice for XML feeds.

    That's where this tool comes in. It focuses on tasks, not words, links, or tags. And what makes it nice is that the task-functions are universal: Step 1, Step 2, then Step 3. Some orders are more effective than others.

    This tool will help you have more time to work [or not work], rather than spending time hunting for solutions. That makes you more productive and focused.

    For the rest of this blog spot I'd like to run through some examples of how this type of tool could be used to analyze your workflow, then return to you some suggested tools to help you out.

    What I'm going to do is talk about a general platform that would do this type of search and isolate each phase into steps. This isn't necessarily the order that your custom tool would take, but it illustrates the types of steps a tool might take to analyze your workflow and find a solution for you.

    In each step, I’ll apply the theoretical bot to the web and return back a suggested tool that would help with analyzing your workflow.


    STEP ONE


    One thing everyone does is search for things. As you well know if you use Google, you can get a suggested site or product based on your search. What would also be helpful is a suggested product based on my task.

    Step one of this tool is to notice your nuances in your searches. Again, we’re not focusing on links, we’re focusing on the way that you search; what you do with the data, and what types of information and content that solves your problem.

    Step one of the tool is to simply notice what you are doing. This phase of the analyses is to differentiate between a worthwhile outcome and a result that doesn’t do anything for you.

    Suppose you’re looking for a particular feature that hasn’t been given a formal name; or you are visiting many sites, but only saving some links, but not others.

    Step one of the analysis is for the tool to monitor your site-link relationship and notice what you are actually saving to your bookmarks and your aggregator. Again, we’re moving away from both links and content, and focusing more on the particular task you are engaged in.

    A sample search: Search Engines with XML Feeds

    Let’s suppose your task was to find all sites that have search engines with XML feeds in the search result. As you save your links, the tool would notice that even though you arrived on Syndic8 [which has feeds for each link], you didn’t spend much time there, and didn’t save anything from the site.

    As you visit other sites with search engines, the tool would be able to figure out that you’re not interested in just search engines, or just feeds, but you are looking for particular search engine platforms that only deliver a single feed for that overall search.

    Yet, what you don’t want are:

  • Search engines that have no feeds;

  • Sites and engines that give you links to individual feeds;

  • Feed-search engines in foreign languages;

  • Sites which prevent you from typing quotes in your search phrase as you want them; or

  • Search engines that say they provide a feed but do not actually work.

    Again, the tool starts to notice that what you want is a search engine that has a very specific task: Provides a single feed for that search, and there are no feeds for the individual links.

    The tool at this point starts to do the hunting for you, and adjusts your search results. In this case, it returns the following tool:


    XML Search Platform Discovery

    Finds search platforms with a single xml feed option


    The bot tells you that it found a tool that has the following abilities, and that the system is now in place on your computer, ready for you to now use:

  • Stand-alone system. Discovers search platforms with a single XML feed result.

  • This search platform injects phrases into the secondary platform, tests the outputs, and identifies differences in content and time.

  • Tool evaluates integration of reporting output with the aggregator. Identifies strengths and weaknesses of each platform. Identifies novel features.

  • Shows which platforms have which features: [Types of possible features]. Compares the advertised steps and features with actual results. Shows which types of searches work with which platforms.

    The bot knows this is a new tool for you and that you may want to talk to others. The bot says, "To discuss this more use the following tag ."

    Review what the bot just did

    Let’s review. The tool has noticed that you are [A] searching in the platform, and then [B] saving links in your bookmarks, tags, and aggregator. Today, "noticing a user's movement across multiple platforms" is not something the normal search engine does.

    Also, this tool is moving the results to the top that are not related to links, but have specific features that you consistently identify in your saved links, even if you have no name for that feature. The tool analyzes what you are saving and uses a multi-tag array-analysis to figure out which features, aspects, content, or problems you might be trying to solve.

    All the bot did was watch what you were doing, and find something that would make your life easier. The neat part: It did this while you were working, and injected the new steps into your procedures. The bot informs you of what you no longer have to do, and gives you guidance on how to apply the tool in your tasks.

    The key: The bot did the work, found the tool, did the analysis, uploaded the information, and ensured your platforms were integrating and that the tool would improve your workflow. This saves you a lot of time.

    Array Search Report

    When I say a multi-tag array, I mean that we can put the characteristic from site A into a search string, and file this as a response. Then site B has another set of parameters.

    The more sites that we visit, the greater the confidence that our unique characteristics and features start to have greater hits. If we were to compare Site A characteristics alone, there’s no way to know which of those features is more or less important; but as I add site B, I’ll come up with another list of features, and some may overlap.

    As we increase the number of sites visited [theoretically approaching 100%], some characteristics have greater recurrence. That’s where this tool comes in. It analyzes the features by laying out the content, features, and aspects of each saved-link and puts them in an array.

    Rather than only give you search results based on a yes-or-no response, the array approach focuses on weights. If you do a simple 2-by-2 array of two factors on site A and two factors on site B, there may be some overlap, and some factors may have no commonalities.

    Site A may have two factors like RED and SEARCH Box. While Site B has another feature called SEARCH Box and LINK to Google.

    This is just a simple example. But imagine what would happen if we use an array-search-analysis for these two sites: We’ll see that the common link, or higher weighting, is associated with SEARCH Box, while the other two parameters in this case are low in frequency, approaching zero.

    What can happen next is the search tool will put more weight to that common factor, and then take the third array from your next round on site C, and notice what you do across multiple platforms.

    Site C may have a different set of parameters. So the search tool then asks, "What’s the common element here?" And it goes back and reviews site A and B [while you are busy working on another task], and then reviews site C again to find the common pattern.

    In short, what we have is a user-support-bot that analyzes the end-user task performance, and collects data to assign weightings. The bot will identify patterns and characteristics, and then adjust the search to find more of that type of feature.
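    Under these assumptions, the array weighting reduces to counting feature recurrence across saved sites. This sketch reuses the RED / SEARCH Box / LINK to Google example from above (site features are invented examples):

```python
# Minimal sketch of the multi-site "array" weighting: count how often
# each feature recurs in the links the user saves; recurring features
# get the weight. Site features are invented examples.

from collections import Counter

saved_sites = {
    "site_a": {"RED", "SEARCH Box"},
    "site_b": {"SEARCH Box", "LINK to Google"},
    "site_c": {"SEARCH Box", "XML feed"},
}

weights = Counter()
for features in saved_sites.values():
    weights.update(features)

# The recurring feature floats to the top as more sites are visited.
print(weights.most_common(1))  # [('SEARCH Box', 3)]
```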

    The bot knows the types of features that would characterize a successful system:


    Retrieval System Goals


    Analyze movement and identify repetitive duplicate tasks across multi-platforms to find tools that will more efficiently complete user tasks.


    Again, we’re focusing on finding capabilities and tools on the web that will allow you to move through these tasks more easily. The bot knows the useful sites are those sites that do the following:


    Success Criteria of the Returned Systems


  • Reduced time-on-tasks

  • Expedite data integration

  • Eliminate redundant typing, data entry, copying, and reposting; reduce the chance of minor typographical errors having a large impact or delay on other uses and platforms

  • Group repetitive movements and tasks into a single routine; drive users to select and use tools that consolidate many steps into a single one-click approach

  • Cross-platform integration

  • One-entry with multiple uses and approaches


    This tool is something that will allow users to go about their work while the system discovers the desired tools to consolidate these repetitive tasks into a simpler set of steps.

    The trick of all this will be for there to be an indexing system that can save these results, and at the same time use this information about how you work to hunt down other sites that may be of interest.

    Applying the criteria to our search example

    Let’s look at our XML Search routine. Again, what we’re looking for are search engines that provide an XML feed for the search, not for the individual search result. What we don’t want are search platforms that say they do this, but fail to provide a feed result in cases where there is no existing term or result.

    If we were to apply this tool to this search objective, we would want our bot to figure out what we are looking for, and then inject test phrases into the search engine to ensure that the site actually does what we want: Provide a feed for the search itself, not a feed for a single search-item on that list.

    Also, when we go about our tasks, keep in mind the multiple steps we are taking. Again, to do this search and monitoring task manually, we’re jumping back and forth between the search engine, the target sites, our bookmarks, blog, and aggregator to save the feeds.

    What the tool is doing while we go about our search is analyzing what we save; then the bot goes into the sites we are using and monitors the integration, reporting, and outputs for each platform.

    In the case of our XML Search Button search, the tool would notice and monitor that we are copying certain links [but not others], saving that particular URI [but not other URIs] and then making predictions about what types of tools would expedite these tasks.

    At the same time, as the tool is saving data on what we like and are doing, it is also testing the returned results to identify strengths and weaknesses of each platform.

    Let's take a look at some of the types of things a bot could be doing for you:


    Bot Analysis


  • Survey: Identify novel features [external site]

  • Catalog: Show which platforms have which features [read = "logic mapping"]

  • Assessment: Compare advertised standards capabilities with actual results

  • Integration validation: Show which types of searches would work with which platform


    No doubt, you already have ideas about what could do this in 2005. But think back to the days when you didn't know. Wouldn't it have been nice to have a bot doing this hunting for you?


    STEP TWO


    Let’s look at the general features of this user-centered bot, and the types of tasks it performs to support the end-user.


    User-Bot Tasks and Features


  • Tests site features and options

  • Identifies capabilities

  • Identifies new features and tools

  • Recognizes user-scenario when tool may be useful

  • When user does these new tasks, bot estimates "if relying on external-tool-functions would expedite the user-task"


    This tool essentially generates a bot that will monitor internal activity across the platforms, and then organize the data into useful categories that can be compared with external findings on the web.

    The bot also can download new software; test it out using user-defined preferences; and record the results into categories useful for matching with user-task-performance.

    This is another way of saying that even though we don’t have a particular task that needs to be improved, the bot still hunts down and finds new software that may be of interest in the future. The bot’s role is to organize the results into logic-maps that can be compared to the steps a user takes to accomplish a task.

    In those cases where the user-steps show a definite pattern, the bot can then look at the task-maps assigned to new software and suggest a product, site, or external tool that would reduce the number of steps in the user’s task-map.
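    A toy version of comparing the user's task-map to a candidate tool's task-map; step names and the candidate catalog are invented for illustration:

```python
# Hypothetical sketch: suggest a tool when its task-map covers the
# user's repeated steps in fewer actions. Step names and the candidate
# catalog are invented illustrations.

user_steps = ["search", "copy-url", "open-aggregator", "paste-url", "tag"]

candidates = {
    "OneClickSubscribe": ["search", "subscribe-and-tag"],
    "PlainSearch": ["search", "copy-url", "paste-url"],
}

def suggest(user_steps, candidates):
    """Recommend the candidate with the fewest steps, if any is shorter."""
    shorter = {name: steps for name, steps in candidates.items()
               if len(steps) < len(user_steps)}
    return min(shorter, key=lambda name: len(shorter[name]), default=None)

print(suggest(user_steps, candidates))  # OneClickSubscribe
```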


    Bot Monitoring Tasks


  • Notice URL lists user is referencing

  • Monitor types of links saved/not-saved to bookmarks and aggregator

  • Identify the common characteristic of interest across many platforms

  • Identify the URL and steps taken to save the URL

  • Conduct a test search, monitor how the saved URI is used or affected by the end-user

  • Monitor the high frequency tasks

  • Evaluate the legacy of the search in terms of test outputs and the end-user response to the presentation


    Suffice it to say that the bot could be doing a number of things to analyze the user’s patterns across multi-platforms and also registering this data into templates that could be compared to external capabilities.

    Going back to our search example, this bot could go onto any content platform that matches our recurring criteria. The bot would then do what we are repetitively doing: Finding sites that have a search tool which reports a single feed.

    After a while, the bot would be able to duplicate the steps we were doing, and we could then go about our primary task [something else].

    Again, in the search example, the tool would find sites that have XML buttons for feeds even if the site is not registered or pinged with FeedMesh and not integrated with protocols for a universal feed subscription.

    Rather, the bot would analyze our steps; capture the desired sites; know to duplicate our many uri-link-saving-steps; and automatically inject our target site into the aggregator.

    Think of it as a more advanced version of universal replace in a single-platform word processor. Rather than simply noticing a common misspelling on a single site and doing a universal change [saving the time to manually change each error individually], the bot does the same approach but focuses on tasks, not individual letters or characters.

    The bot saves us time by noticing our pattern and finding methods to more quickly achieve the result, without regard to the content, link, or particular product used.

    The bot acts like a Google text-predictor. It notices what we are doing; finds locations with similar content; displays those similar characteristics in groups; notices what the user is doing with the data and results: How it is saved, where the data is placed, what subsequent steps are taken; and then monitors how you handle that data, what you check and do next with it.

    The bot allows us to move from a text-based prediction in Google-searches to something that is task-based.

    Sample bot search

    Let’s take a look at this bot in action. There is a product out there that is named an XML Search Button Discovery tool. But we don't know that. Neither does the bot, yet.

    At this stage of our search, the bot is noticing that we are continually going back to a list on FaganFinder in search of XML Search Engines. The bot then starts to look for similar sites with the same features.

    The bot is also noticing that we are saving some of the site URLs in a bookmark and then blogging in draft, and then saving the feed URI into a particular aggregator.

    Then the bot notices that on each of the saved sites, we’re also doing a test search. The bot also notices that not all the sites are then kept; rather, with some types of results, the user removes the item from the list.

    With time the bot starts to notice a pattern. We are looking for a particular kind of site, with specific features, and only saving and really using those sites that deliver a specific kind of feature.

    The bot looks across the web and finds a tool that improves our task-logic-map. The bot returns the following tool:


    XML Search Button Discovery

    Identifies search sites with single xml search feed


    The bot does two things when it finds the site. It integrates with the site's XML-information to read the logic map; and the bot also does some testing.

    After a quick look, the bot concludes that the site has the following features:


    Discovery Steps


  • 1. Identifies a target site from anywhere on the web. Can either have an active bot hunt down lists, identify an individual site, or read from content in XML feeds.

  • 2. Tool saves sites with button searches in del.icio.us, other bookmarks, or an aggregator.

  • 3. User enters search string into tool; tool then explores each site looking for those sites that return a single XML-feed-button for a single search.

  • 4. Tool saves XML feed URI to aggregator, explores results, zeros out all items as read.

  • 5. Tool monitors changes. Whether the target searches appear, how quickly, and time lags.
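
    The core of the discovery steps above — probing candidate sites and keeping only those that return a single XML feed for a whole search — might look like this sketch. The network probe is replaced by a stand-in function, and all site names are invented:

```python
def sites_with_single_search_feed(candidates, run_search):
    """Keep only sites whose test search yields exactly one feed URI.

    `run_search(site, query)` stands in for the bot's real probe; here
    it just returns the list of feed URIs the site offered for the query.
    """
    kept = []
    for site in candidates:
        feeds = run_search(site, "test query")
        if len(feeds) == 1:          # one feed for the whole search
            kept.append((site, feeds[0]))
        # Sites with no feed, or one feed per result, are dropped.
    return kept

# Simulated probe results for three hypothetical sites.
fake_results = {
    "searchalpha.example": ["http://searchalpha.example/feed?q=test+query"],
    "searchbeta.example": [],                            # no feed at all
    "searchgamma.example": ["feed1", "feed2", "feed3"],  # feed per result
}
kept = sites_with_single_search_feed(fake_results, lambda s, q: fake_results[s])
```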


    The bot also runs through the site, creates a site map for itself, and verifies that the tool can indeed answer the following types of questions:



    Measures of Merit


    Tool answers these questions:

  • Can XML button search showcase new additions to searches?

  • Will reported results showcase only additions since the last update?

  • Can the XML search platform differentiate between new content and previously added content?

  • Can the XML aggregator differentiate in the search-feed those results that already existed, and new content added after the start-search-time?
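
    The differentiation the last two questions ask about — previously existing versus newly added content — reduces to the aggregator remembering which item ids it has already zeroed out as read. A rough sketch, assuming each feed item carries a guid:

```python
def new_items_only(feed_items, seen_ids):
    """Return only items not seen before, and remember them.

    Each item is a (guid, title) pair; `seen_ids` is the aggregator's
    memory of what was already zeroed out as read.
    """
    fresh = [item for item in feed_items if item[0] not in seen_ids]
    seen_ids.update(guid for guid, _ in fresh)
    return fresh

seen = set()
first_poll = [("a1", "old post"), ("a2", "older post")]
new_items_only(first_poll, seen)        # baseline: everything is "new" once
second_poll = first_poll + [("a3", "post added after search began")]
fresh = new_items_only(second_poll, seen)
```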


    The bot lets us know that the returned tool is interesting. This external tool is a system that saves baseline searches in one multi-channel feed; then compares this baseline search data against the returned results in another test search.

    The bot lets us know that the search tool correctly differentiates between [a] sites that report search outputs in a single feed and [b] sites that report search results, but provide a feed for each individual search-result.

    The bot also lets us know where to go to discuss this tool with others. The bot says "To discuss this more use the following tag ."

    Reviewing what the bot just did

    Let’s go back to our original list of criteria for success. Step one of the analysis was to deploy the bot to monitor our actions, and then find a tool.

    This tool automatically does what we want: Find search engines that have a single XML feed. The tool has satisfied the following criteria:


    System Check: Confirm returned tool expedites user task performance


  • Reduced time, increased productivity: Eliminated duplicate steps of taking data from one search platform, retyping it or copying it from one platform to another. [One entry, multiple uses]

  • Expedited data integration: Analyzed user practices and returned a tool that will accomplish this task

  • Cataloged and mapped the user steps [steps cataloged into a single routine]

  • Done an auto-search of available solutions: Started a search of the bot’s templates to match our task-map with those of product solutions already available



    STEP THREE


    The bot at this point has more data than before. The user has been issued a potential solution. The bot’s job is then to evaluate the use of this tool against the original task steps.

    In short the bot is comparing the efficiency of using this proposed tool against the continuing steps that have been simulated.

    In general the bot has already noticed which platforms and tasks are related. The bot stores the data, and uses the task-map to compare product-solutions.

    At this point the bot starts to track whether the success criteria have been met. The bot ranks the user performance with this tool. As the user engages in more repetitions, there is a higher probability of successfully listing and ranking the relevant criteria.

    Here, the bot notices what the user continues to do with the result after putting it into the aggregator [Ex: Zero-ing out all items as read]; and notices the other tasks associated with that event [Ex: publishing in blog].

    The bot figures out that the two events are related: The user is zero-ing out the aggregator, then publishing something. The bot does the same thing as above, and reports the following tool for the user to apply:


    XML Search Selector

    Hunts search platforms and integrates with aggregator


    The bot tells the user how to use this tool:

  • "Users enter string. Tool hunts down sites with both XML search capability and a single button for that target search.

  • "Tool chooses search site best suited for task, area, search, content, tags and other user-defined parameters.

  • "Tool takes user search string, applies it to site, grabs the search-feed for that particular search, and then imports it into the aggregator."

    Then the bot flashes a message, "To discuss this more use the following tag ."

    What the bot does after returning this product is ask:

  • Can the XML Button Search showcase only new additions?

  • Will using this tool advance the user's productivity?

    The bot doesn't know, yet ... until it starts the next phase of its analysis.


    STEP FOUR


    This phase of the process is to monitor resource utilization. At this stage, the bot is narrowing in on the characteristics of our recurring searches. It is also associating success criteria with the desired features; comparing these criteria to the reviewed products; then assessing whether using this external tool would actually improve our performance.

    This phase is a logic tree analysis. The bot dissects the results; establishes the criteria and decision tree to discriminate between used and non-used resources; establishes a decision tree map to summarize the user movements; and looks for similar templates.

    In short, what the bot is doing is a time-motion study, translating the decisions into a visual map that can be compared to other products' capabilities. The bot is archiving the user patterns, and comparing them to the decision maps in products to ensure that the user is moving along routes that are both efficient and arrive at the desired outcome.

    The bot is also noticing the users' favored- and least-favored results associated with both the logic map and the desired features.

    The bot is preparing to cache these results under various weights by categorizing the criteria. The bot is preparing to compare the user's task-map with the task-maps of the available features.

    This is where we get into XML link wiring diagrams. This is a summary tool for links. As we manually go through various steps in our search, the bot will notice how we move along the site, what the roadblocks are, and the results of our selections.

    At the same time, it compares what we are doing, with what we appear to desire to do. The bot will create a logic-wiring diagram for our task, sort of like a wiring diagram for a decision tree or a silicon chip.

    The bot will integrate the searched-site-mazes into these maps to show the variety of steps users have to take to make a decision. The task map isn't what the site says. It's what the user attempts to do. It's a record of where the user is, how they got lost, and the paths they take.

    Again, there is no right answer. Rather, there are more efficient answers which the bot can recognize in our work. The goal of the bot at this point is to create a summary diagram of what we are doing with our links and how we are integrating them, and then move the analysis from looking simply at links and content to conducting an analysis of our tasks.

    With time, the bot will learn which outcomes are desirable, and which types of maps are most conducive to a productivity improvement. The bot can identify the pattern and context of information, and then also identify gaps in the logic-tests used.

    The wiring diagram provides structure to vast data. The common task patterns are distinct by shape. This means that relative distances, angles, and the number of inflow and outflow steps of the wiring diagram, task map, or logic tree become features used to discriminate a particular task, and then compare that routine with other options and solutions available on external platforms outside the user’s current tools.

    Using our XML Button Search example [looking for sites that have XML feeds for a single search, but not feeds for individual search results], let’s take a look at the types of steps that could be put into a wiring diagram, and the actions the bot would notice are related to those steps.


    Logic Diagram Steps

  • Start task: Report data
  • Decision: Output linked
  • Retry task: User backs up, re-does search, creates link to output
  • Start over: User backtracks, attempts new step
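
    One hedged way to realize such a wiring diagram in code is to treat the task map as a graph of (step, next-step) edges, and to compare maps by a label-independent "shape" such as each node's in/out step counts — one concrete reading of the distances-angles-and-steps idea above. All step names are invented:

```python
from collections import defaultdict

def shape_signature(edges):
    """Summarize a task map by each node's (in, out) step counts.

    The sorted multiset of degree pairs acts as a crude "shape" that
    two task maps can be compared on, independent of node names.
    """
    indeg, outdeg = defaultdict(int), defaultdict(int)
    nodes = set()
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
        nodes.update((src, dst))
    return sorted((indeg[n], outdeg[n]) for n in nodes)

# The user's observed routine vs. a product's built-in routine.
user_map = [("start", "search"), ("search", "retry"), ("retry", "search"),
            ("search", "save")]
tool_map = [("input", "query"), ("query", "requery"), ("requery", "query"),
            ("query", "link")]
# Same shape, different labels: the product matches the user's task.
```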


    The bot tracks the paths in the site’s site map, and realizes the pattern of blocks: The user continues to dance around a function, attempting multiple entry links, but to no avail.

    The bot also notices there are duplications in the searches, and multiple attempts to enter the same data.

    With time, the bot concludes that the data is entered correctly, but something is not working on the platform. The bot concludes the site appears to be correct, but is not functioning, and therefore does not currently meet requirements.

    The bot stores this decision tree in the archive; makes a note of where it was on the particular site; and makes a note to recheck the site to see if the site changes.

    If there is a change in site, the bot reattempts the tasks and compares the user original path to the changed path to see whether the outcome meets the apparent objective.

    If the bot concludes that the site has been updated and meets requirements, the bot then signals the user that the site meets the requirements; and the bot then applies the steps the user previously did to this site: Extracts data, conducts a test, saves the link, and then reports the feed URI to the aggregator.

    The user has the choice to go into "do not bother me mode" and let the bot do this without any notification.

    This is a sample product that the tool could return as a valid test to incorporate and run. This type of test could either be discovered, or it could be incorporated with external updates. Here is what the bot finds:


    XML Aggregator Search Integration

    Tests search tool and aggregator compatibility


    One feature is the ability to assess how well the search tool can [a] provide a feed; then [b] inject that feed into the aggregator.

    This feature measures the integration between the aggregator and original search platform. The tool can differentiate between different search platforms; show how the reported results compare across aggregators; and show there is a trace between the original search input and the final search output at the aggregator.

    The bot tells us that there is a report which shows how the XML Search tool reports. The bot is able to create a logic-map of the following decisions in the newly-discovered-platform:


    Report Discussion


  • Does the feed report only changes?

    If not, then the search engine provides no superior service through an XML feed than one might get by using the tool directly or a simple search engine.

  • Can the read-all option be zeroed out?

    If not, then the output list is no different than a direct search on the site or by using a simple search engine.

  • After zeroing out the search results in the aggregator, does the feed-aggregator report only new additions?

    If the aggregator reports more than new additions, the search tool-aggregator are not providing a better service than one might get by searching the XML search platform directly.

  • Is the search result reported as a single search result?

    If not, then the aggregator and tool are not correctly differentiating between [a] search results in a single feed, from [b] results that get assigned an individual feed for each individual return.

  • Are there separate feeds for each search result?

    If yes, then this is not what is needed, and the tool is not working correctly.
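
    The first three report questions can be phrased as mechanical checks on two successive polls of the same feed. A sketch, using bare item ids rather than real feed data:

```python
def check_feed_behavior(first_poll, second_poll, zeroed_after_first=True):
    """Run the report's first three checks against two successive polls.

    Each poll is a list of item guids; `zeroed_after_first` means the user
    marked everything read between the polls.
    """
    baseline = set(first_poll)
    additions = [g for g in second_poll if g not in baseline]
    return {
        "reports_only_changes": all(g not in baseline for g in second_poll),
        "can_zero_out": zeroed_after_first,
        "only_new_after_zeroing": zeroed_after_first
                                  and second_poll == additions,
    }

# A well-behaved feed: after zeroing, the second poll holds only new items.
good = check_feed_behavior(["a", "b"], ["c"])
# A misbehaving feed: it re-reports old items on every poll.
bad = check_feed_behavior(["a", "b"], ["a", "b", "c"])
```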


    The bot then sends a message to the user, "To discuss this more use the following tag ."

    The above tool feature is something that does what the user needs. The bot does a good job of analyzing.

    However, this tool appears to be an early generation model. It does not necessarily accomplish the task, nor does it necessarily translate the analysis conclusions into a tool that accomplishes all the tasks.

    Yet, although the results may be specific to a particular aggregator, they are still valuable. If the user decides to transition from one aggregator to another, these results would be useful in deciding which aggregator was most consistent with how the user performs tasks.

    At this phase of the analysis, the bot stores the logic template and continues the monitoring, saving the results for possible future use.


    STEP FIVE


    The bot not only notices the immediate tasks users are performing, but also monitors the subsequent uses of that data and travel to other sites. The tool will generalize the steps to find what the user is attempting to do.

    Here, the user is associating a particular feature on a site with a subsequent task of linking and posting. The bot has better information. It knows [a] the user is looking for an XML feature; and [b] hopes to integrate this feature with a particular follow-on function.

    As it narrows in on requirements and user experience, the bot returns a new product:


    XML Search Evaluation

    Tests external platform support


    The bot learns this new tool will check the XML search sites’ ability to review a third platform.

    The bot has mapped the site, and looks through the archive for a similar problem statement: Can the XML-button sites be used as a means to search del.icio.us tags, links, or bookmarks?

    The bot runs through the site and discovers: The [1] tool evaluates whether [2] this platform can be used as a search platform to search and return results from a [3] third site.

    The bot sends a message to the user, "To discuss this more use the following tag ."

    What just happened?

    The bot found the above tool. Now, the bot has a new set of commands and features from an external tool and is test checking the ability of the user-preferred-platform to work as desired.

    The bot evaluates the integration across platforms and notices what the user can or cannot search outside the given search engine. The bot then records the results and is ready to monitor a new user-task.


    STEP SIX


    This is the part of the analysis where the tool narrows in on the task map. This phase is called the gap analysis. The bot will compare the current state of information -vs.- what is desired or repeatedly requested.

    The bot discriminates the results: what distinguishes one item of interest from another.

    In this situation, the bot is noticing content is distinguished by publication status.

    Also, the bot notices that there are a high number of tasks associated with non-pinged content. This tells the bot that there needs to be a method outside FeedMesh to find this data and content.

    Also, the bot notices that there are a high number of searches associated with non-pinged content. Despite the content not appearing on the FeedMesh, the user keeps asking for non-pinged content.

    The bot registers a gap as the difference between what the user desires -vs- what the test platform provides: The system provides content that is pinged, but the user wants content that is available in another way.

    The bot then starts to hunt for a product that will do this: Find content that isn't pinged. After doing a trillion-calculations in a second, the bot finds the following product:
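
    The gap registration described above is essentially a set difference between what the user keeps requesting and what the platform actually returns. A minimal sketch with invented item names:

```python
def gap_analysis(requested, provided):
    """Register the gap between what the user keeps asking for and what
    the test platform actually returns (e.g. pinged content only).
    """
    requested, provided = set(requested), set(provided)
    return {
        "satisfied": sorted(requested & provided),
        "gap": sorted(requested - provided),   # needs another discovery route
    }

user_queries = {"pinged-post-1", "non-pinged-post-1", "non-pinged-post-2"}
feedmesh_results = {"pinged-post-1"}
report = gap_analysis(user_queries, feedmesh_results)
```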


    XML Search Platform Compare

    Compares feasibility of finding content


    One aspect of searching is to be able to find both pinged and non-pinged content.

    The bot discovers after reading the product XML specifications, and doing a simulation on the site, that the feature tests whether the XML button feed can find content that has not been pinged.

    The bot learns this feature looks for content not in the service list [not pinged], but that does appear on the web. The specifications say that the tool will hunt down content that can be linked to an XML feed, but is not searchable through one of the services.

    Also, the bot learns that the tool can identify whether or not the external search platform can still find content, even though it may not have been pinged, and does not report as a search result in the FeedMesh.

    The bot thinks this is a good prospect, installs the system, and you're ready to go. The bot then says, "To discuss this more use the following tag ."

    What just happened?

    The bot found a tool. It was that fast. The tool monitors how the individual sites are performing. It then narrows in on what new features exist.

    This is more of the process of narrowing in on target requirements relative to user actual tasks.


    STEP SEVEN


    Now the tool focuses on successful results. The bot is noticing what the user is doing and identifies tools with successful results. The bot associates these preliminary steps with the next steps.

    The bot continues to gather data that will help discriminate the feature on the platform. At the same time, the bot is also collecting user-logic-flow on the platform to create two maps: One map for the site-logic-flows; and a second map of what the user is doing on that map.

    The bot will compare the site map with the user map and see which types of layouts would be preferable. These templates are registered in the central registry, and the bot then puts greater weight on these types of layouts and formats.

    The bot also then monitors the task completion. It monitors the time and motion, stores the data, and compares it with other scenarios. The bot doesn't just look at what should be happening. It verifies the user is getting the job done.

    Now, the bot compares the time improvements vs. the learning curve. The bot asks whether the user needs a change in steps, map, site, or task-order to improve the learning curve to meet production goals.

    At this juncture, the bot is bundling tasks into bunches. The bot is starting to assign performance data to the logic map.

    Also, the tool is segregating common tasks into groups that other tools and platforms already do. The bot is hoping to compare the user's logic-maps with those of external platforms to find the best match.

    Cross-platform communication

    Again, the bot relies on results from other bots to make this analysis and comparison. The bot ensures that the tasks are properly mapped; and in cases where the task flow diverges into multiple lines [there are two or more ways to accomplish the same task], this divergence is noted as an alternate or back-up.

    The bot is narrowing in on the criteria that the tool hunter can use to successfully assign a tool to satisfy these requirements. The bot finds the following tool:


    XML Content Discovery

    Evaluates content visibility


    The bot discovers that this tool is different than PubSub in that it looks at the platform search capability, not the content in the reported feeds.


    Desired results


    A. Tool correctly reports platforms which [1] perform multi-platform searches, and [2] produce results in a single feed.

    B. [?] Creates a unique search string for XML search buttons to find.

    C. Comparison between platforms


    Notice one of the task-statements [B] is not clear [?]. The bot knows that there is a potential problem with a platform. Thus, the tool may not work well. Perhaps there are beta errors that need to be fixed.

    The bot monitors the developer's progress in fixing the site and lets you know when the tool is ready for your own use.

    Also, the bot identifies which search platforms find the target string: identifies the time lag; which platform found the target string first; and reports which platforms miss the target string.

    The bot is looking at the newly discovered tool and notices how it works:


    Platform Steps


    1. Target string input to one platform

    2. Platform conducts search of external platforms.

    3. Test platform reports results

    4. Identifies time, gaps, whether search platform misses hits

    5. Identifies how to correct the problem in code or template

    6. Identifies options to get content and feed into the search platform.
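
    Steps 3 and 4 above — reporting which platforms found a planted target string, and how quickly — could be sketched as follows; the times and platform names are invented:

```python
def rank_platforms_by_lag(publish_time, first_hit_times):
    """Rank platforms by how quickly they found a planted target string.

    `first_hit_times` maps platform name -> time of first hit, or None
    if the platform never found the string (a missed hit).
    """
    found = {p: t - publish_time for p, t in first_hit_times.items()
             if t is not None}
    missed = sorted(p for p, t in first_hit_times.items() if t is None)
    ranked = sorted(found.items(), key=lambda item: item[1])
    return ranked, missed

# Hypothetical results: the string was published at t=100 (arbitrary units).
ranked, missed = rank_platforms_by_lag(
    100, {"platformA": 160, "platformB": 115, "platformC": None})
```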


    The bot thinks this might be interesting and sends a message to the user, "To discuss this more use the following tag ."

    What just happened?

    The bot has found a tool that will evaluate the content visibility on a given page. The bot is comparing the user-steps to those in the external option.

    By looking at the content discovery tool, the bot maps this tool into the six steps. It then compares these features of the external tool to what the user is actually doing.

    One approach is to assign the tool and provide instructions to use that tool. The other option is to continue the search for a tool that better fits the user's preferences. The user can define the successful result in terms of task complexity or time to learn the task.

    If the task is of short duration, it may not be prudent to use a new tool that requires learning above and beyond the original task. The bot knows it is not wise to over-engineer a simple task.

    However, if the task is relatively long compared to the time required to learn the task, then the user could choose to incorporate the product into the user's platform, and schedule time to learn the new procedure.

    The bot decides that this tool is favorable. There is clearly a cost-benefit tradeoff here, one the user could weigh and schedule manually, or something the user could choose to let the bot take care of.
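
    That learning-curve tradeoff is a simple break-even calculation: adopt the tool only when learning time plus the faster per-task time beats the current routine over the expected repetitions. A sketch with illustrative numbers:

```python
def worth_adopting(current_task_time, tool_task_time, learning_time,
                   expected_repetitions):
    """Adopt the tool only if total time with it (learning included)
    beats staying with the current routine. Times in minutes.
    """
    stay = current_task_time * expected_repetitions
    switch = learning_time + tool_task_time * expected_repetitions
    return switch < stay

# Short-lived task: learning the tool costs more than it saves.
one_off = worth_adopting(10, 2, 60, 3)      # 60 + 6 vs. 30
# Recurring task: the learning curve pays for itself.
recurring = worth_adopting(10, 2, 60, 20)   # 60 + 40 vs. 200
```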

    Of course, the bot sends an e-mail to the schedule tool, and this is done through an XML feed, thanks to FutureMail.


    STEP EIGHT


    This phase of the process is one where the bot monitors the tools used and choices adopted to see if they are working.

    The bot will integrate and work with unfolding technology and concepts, noticing new tools, steps, codes, and procedures used and available. Notice the bot is not as concerned with words or tag-names.

    The bot will set up methods to check what the user is doing with that new feature; monitor those coding functions; look for similar features; and then make a map of the features available to perform that step.

    The bot will independently check what the user is hoping to accomplish, and conduct a transparent test of the external platform-option to check that the desired results match what actually happens.

    To do these types of procedures quickly, the bot decides it needs an upgrade for its own use. The bot returns the following tool:


    XML Tag Checker

    Validates tag tools


    The bot learns that this tool checks the ability of the platform to identify and report tags. Also, the tool evaluates whether the tag-format instructions are consistent with the platform's ability to find those same tag formats.

    The bot thinks this is a good tool to monitor a tag and compares [a] the platform-results to [b] non-platform results.

    The bot then gives you a list of features and capabilities. When the tool matches your logic map, the bot can inject this tool into your platform. In this case, there may be something for you to read. The bot gives you a quick report:


    Requirements


    Your bot has found a tool which identifies tags that are:

  • Saved and used;
  • Exist but do not appear in the platform search results; or
  • Used and searchable with other platforms, but not this particular platform.


    The bot discovers this tool identifies what is searchable, which types of tags work and are searchable, and how effective the XML search platforms are in finding simple or complex tags. After doing some tests, the bot learns this tool can do a good job of checking whether multi-word-tags can be generated, used, reported, and cross-linked in a secondary medium.
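
    The three tag categories in the report above amount to bucketing saved tags by where they are actually searchable. A sketch, with hypothetical platform and tag names:

```python
def classify_tags(saved_tags, searchable_by_platform, platform):
    """Bucket tags per the report: saved and found here; saved but found
    nowhere; or findable on other platforms but not on `platform`.
    """
    here = searchable_by_platform.get(platform, set())
    elsewhere = set().union(*(tags for name, tags in
                              searchable_by_platform.items()
                              if name != platform))
    return {
        "works_here": sorted(saved_tags & here),
        "invisible_everywhere": sorted(saved_tags - here - elsewhere),
        "only_elsewhere": sorted((saved_tags & elsewhere) - here),
    }

buckets = classify_tags(
    {"xml", "multi word tag", "lost-tag"},
    {"platformA": {"xml"}, "platformB": {"multi word tag"}},
    "platformA")
```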

    The bot is happy and sends a message, "To discuss this more use the following tag ."

    What the bot just found

    The bot gave you a report that you might want to take a close look at. There is something the bot wants your input on.

    After you approve, the tool integrates with a new feature. Its job is to notice new patterns and identify shortfalls. The bot will notice the logic maps on a particular site, and compare them with the user-routes taken to identify roadblocks.

    The bot will be able to identify results that are not consistent with the desired outputs, or that drive outputs that, if adopted, would generate less than ideal results.

    This is another way of saying that if the bot were to arrive at a site like Technorati and conduct a test of the tags, the bot would know that the single quotes returned in a search string would generate one type of search return in the aggregator; the bot would then monitor how those results were used in the aggregator to know whether the search-command-turned-feed-results should be modified to return more specific results.

    Whew! And all that in just a micro-second.

    The bot will return results and decisions to the user in the form of advisories on what to change or adjust in the attempted task.

    The bot may notice that the way that Technorati is set-up may not be consistent with how the user generally does things; the bot will make a suggestion on which specific link to click on to accomplish the goal.

    In the case of Technorati, the bot would inform the user that it is not wise to attempt to create a watch list from the search platform, as the returned result will generate backslashes; also, the bot would notice that the user continues to enter single quotes, despite the objective of finding all the terms.

    The bot knows this because the user continues to enter the site, create a watch list, and follow the directions, but then deletes the results from the aggregator. The bot would signal the user how to modify both [a] the search commands and [b] the route taken [to find the CREATE WATCH list and ensure that the ADD button] did not add a feed that had backslashes in the place of single quotes.

    The bot knows to do this because it has [1] a task map for the user-choices; [2] a site map for the target site that it creates, and compares to the site-provided XML template; and [3] data on how the feed is ultimately used or disposed.

    In this case, the bot tells the user that the site would be better navigated [to achieve the goal of monitoring a tag] if certain routes were taken, and certain search strings were entered.

    This is only an example with a familiar website. Imagine the possibility of being able to apply this tool to something that is new and has not been beta tested, where users could quickly figure out whether the tool is working, and whether the results are superior to other outputs from competing tools.

    The bot can also consolidate the lessons into suggestions for changes to W3 standards, and into tools to do the task, despite the shortfalls and limitations of the existing platforms.


    STEP NINE


    We’ve already touched on communication with the user. This phase of the tool is when the bot is in a position to assign questions and answers to user criteria.

    The bot explores whether the technology is working as intended; whether the user achieves the result.

    The goal is to look for ways to make the technology more easily used and adopted. The bot is tracking the maps and sharing them with other users to see how other users are adopting the procedure, and what might be gleaned from other bots.

    The communication is both internal between the bot and the platforms, and with the user; and externally with other bots and platforms outside the user’s control.

    The bot prepares a report that narrows in on useful feedback on the success of various user-maps and site maps which the aggregators could incorporate and adopt in cycle updates.

    The bot returns the following tool:


    XML Tag Monitor Report

    Tests assumptions on tag addition options


    The bot reviews the product, compares it to the user-task-map and starts mulling things over. The bot finds a tool that helps it analyze this situation:


    Requirements


    The bot found a tool that answers the following questions:

  • Are multi-word tags working and searchable?

  • Are tags not being used as effectively as they could be?

  • What strategies could be used to get public adoption of a particular tag?

  • Are there advantages to [a] using a tag connected to a separate platform over [b] using a single blog with Q/A/discussion?


    The bot says, "To discuss this more use the following tag ."

    What the bot just did

    As you can see the returned tool evaluates the new technology and asks a number of questions. This information is useful for integration with development efforts.

    The integrated system of bots can provide feedback data to developers on similar tasks. The bots can look at the ongoing development efforts and can provide information to developers on how users are actually accomplishing their tasks.

    The tool will look at planned efforts and estimate whether the planned tool will adequately address the actual logic map users are following.

    The bot also looks at whether the planned development efforts are addressing the weaknesses in the reported site-maps: Ease of follow, roadblocks, unclear procedures, confusing tasks, results that are discarded far earlier than expected.

    In a couple of minutes, the bot notices a new pattern and does some new assessments about your workflow and another tool in the template-registry.


    Bot Assessment


    The bot at this juncture conducts a follow-up step by

  • applying the lessons;
  • keeping an eye out for potential new tools that are archived, tested, and logic-maps are stored for comparison, analysis, and use; and
  • providing feedback on tools so they are developed with actual practices in mind.


    Based on this analysis, the bot then returns the following tool:


    XML FeedMesh Tag

    Analyzing content-tagging publication and adoption


    This feature evaluates whether the content and tag are reporting correctly in FeedMesh. The bot discovers the tool can do the following types of analyses:


    Evaluation questions


  • Did the blog spot not get pinged?

  • Were tags added after blog publication and pinging?

  • Did other content get added on top, so the target-tag was never discovered?

  • Did the tags get used, adopted, converted, and reported into the index?


    The bot says, "To discuss this more use the following tag ."

    What happened?

    The bot found a tool which monitors post-publication results; whether the expected tags are reported and linked as planned; and whether the valid tags showed up with back-ups.

    The bot discovered the tool provides the following types of desirable features: expediting data integration; ensuring cross-platform integration; and focusing on using one step to enter a tag in location A [search, watchlist creation] that is also automatically loaded in a second area [blog, bookmark], then subsequently used in a different location.

    Ideally, if a tag in location A were changed, the user would need to know that their watch list may need to be modified, expanded, deleted, or adjusted.

    After doing some test runs on the tool, the bot learns a couple of things. This tool would highlight the three places where this tag was changed, then change the tag so that the bookmark, blog, and watchlist monitoring tag was updated in the aggregator. Essentially, it takes a minor typing change and ensures the data is seamlessly updated, and remains fresh, across all the affected platforms the user has used.
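    The synchronization idea above can be sketched in a few lines. This is purely illustrative: the platform names and the `TagRegistry` class are assumptions, not any real product's API.

```python
# Illustrative sketch: one tag rename propagates to every platform copy.
# The platform names and this TagRegistry class are hypothetical.

class TagRegistry:
    """Holds the per-platform copies of a tag and keeps them in sync."""

    def __init__(self, tag, platforms):
        # Every platform starts out holding the same tag.
        self.stores = {name: tag for name in platforms}

    def rename(self, old, new):
        """A minor typing change in one place updates all matching copies."""
        changed = []
        for name, tag in self.stores.items():
            if tag == old:
                self.stores[name] = new
                changed.append(name)
        return changed  # the platforms that were updated


registry = TagRegistry("feedmesh", ["bookmark", "blog", "watchlist"])
updated = registry.rename("feedmesh", "xml-feedmesh")
```

    One rename call touches the bookmark, blog, and watchlist copies at once, which is the "handle the data once" idea in miniature.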


    STEP TEN


    This is the phase where the product features get showcased. The goal of this phase is to take the consolidated list of user-maps, and then compare these with the results from the bot-analysis.

    Users would be able to see how repetitive tasks are addressed with a number of options and features in the product. Either the users can choose from a list, or the system will automatically integrate the external tool into the platform and provide instructions to the users on how to work more efficiently.

    The tool assigns solutions and ensures the logic-map is streamlined.

    Here is a sample product that the user's bot might come across. Notice as you read it that the product features get compared with what the user is actually doing. Also, notice that the way the user is generating a feed gets compared with how the feed is actually published and displayed.

    The bot runs across this product, and does an analysis:


    XML Feed Evaluation

    Reviews search tool exploration


    The XML feed on the site says this is a "feed content improvement tool."

    The bot learns that the promoters of the site are saying, "This tool helps assess whether publishers benefit or lose something by using summaries in their feed -vs- using full blog-content in their feed. Tool shows users how search results change with changes in feed summary length."

    The bot has its doubts. The users are saying that the product may not be able to tell the users which search platforms search just the summary, or the entire feed.

    However, the bot learns that the publishers can learn whether key terms and phrases outside the feed summary will or will not be included in the XML search tool.

    The bot is not sure whether this really helps publishers decide whether or not to publish the complete feeds or summary feeds.
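    One way to make that call concrete is to check which key phrases a summary-only index would miss. A minimal sketch, assuming the index sees only the first N characters of the post; the function name and cutoff are hypothetical:

```python
def terms_outside_summary(full_text, summary_length, key_terms):
    """Return the key terms a summary-only search index would never see."""
    visible = full_text[:summary_length].lower()
    return [term for term in key_terms if term.lower() not in visible]


post = "Notes on FeedMesh adoption. " + "filler " * 60 + "tag conversion rates."
missed = terms_outside_summary(post, 200, ["FeedMesh", "tag conversion"])
# A full-content index would catch both terms; a 200-character summary misses one.
```

    If the missed list is empty, a summary feed loses nothing for these terms; if not, the publisher has a concrete reason to consider full-content feeds.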

    The bot learns that the site does a good job of making recommendations on where to place content to get it searched; the problem is that the recommendations have a poor track record of generating sustainable return on sales.

    Oops. However, the bot learns that the site does display the entire feed, and is clear in showing the limits of the search. This is what the bot learns about the tool:


    Evaluation Steps


  • 1. Users enter target audience

  • 2. Tool compares feed to how search tools explore feeds

  • 3. Makes suggestions to content priorities


    The bot learns the goal of the tool is to make the most likely searched content more valuable. It recommends changes to ensure the feed audience is served, with appropriate tone.

    Makes suggestions where to place key names and content. Makes suggestions on what to add to the end of the summary to increase odds of action, interest, and attention.

    The bot has its doubts. It has seen a consistent difference between these types of claims and what the product actually does. The bot does additional testing.

    The bot says, "To discuss this more use the following tag ."

    What happened?

    The bot has learned that the above tool might be able to evaluate whether the product could better support the user: advising the publisher how to improve their feed content.

    The bot-discovered-tool for the most part optimizes choices of the user to integrate the product with others.

    Theoretically, this is a multi-platform analysis from the blog to the FeedMesh burner. This tool looks at how to evaluate constraints, or promote the result within the existing constraints of the platform, firewalls, and local protocols.

    This tool would only have been found had there been a mechanism that could identify constraints in the system; compare them with the user objectives; evaluate the time on feed; then compare the goals of the user to the actual uses.

    The tool would map the site-usage by external readers, and provide feedback to the publisher on what was going on; at the same time, the bot would find tools that would assist the user in achieving their objectives: Improving the feed template to improve subscriber satisfaction.

    However, despite all that the bot found, the bot is not confident the tool can actually do what is advertised.


    STEP ELEVEN


    The user-defined-bot also works with industry leaders and well established products. In this case, the tool integrates with Google.

    Here, the bot looks at what the user is doing with the search results, and finds new content and explores reasons content is not getting attention or visibility.

    The bot compares its own automated efforts with [a] the user and [b] the platform, and understands why there is a difference between the two.

    The bot then goes to an external platform to get the desired tool to accomplish the task. In this case, the bot has found the following tool:


    XML Novel Google Search

    Reports newly added search results


    These are sample requirements for a specific search tool. Ideally, we should see new results since "reviewed-all" was last clicked.


    Requirements


    The tool would differentiate between three things:

  • Content: Blogs and feeds updated
  • Feed: New feeds added since last review
  • Links: New links added to the search results since last review
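    The three-way split above could be computed by diffing the current results against a snapshot saved when "reviewed-all" was last clicked. A hypothetical sketch; the snapshot shape is an assumption for illustration:

```python
def diff_since_review(snapshot, latest):
    """Classify search results relative to the last 'reviewed-all' snapshot."""
    return {
        # Content: known feeds whose update stamp changed since the review
        "content": {url for url, stamp in latest["updated"].items()
                    if snapshot["updated"].get(url) not in (None, stamp)},
        # Feed: feeds that did not exist at the last review
        "feeds": latest["feeds"] - snapshot["feeds"],
        # Links: links newly added to the search results
        "links": latest["links"] - snapshot["links"],
    }


snapshot = {"updated": {"a.example": 1}, "feeds": {"a.example"}, "links": {"x"}}
latest = {"updated": {"a.example": 2, "b.example": 1},
          "feeds": {"a.example", "b.example"}, "links": {"x", "y"}}
changes = diff_since_review(snapshot, latest)
```

    The snapshot is what the "reviewed-all" click would save; everything else is plain set arithmetic.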


    Seamlessly, the bot adds the following tag to your list [bookmarks, tags, watchlists, aggregator, and blog reference tags], and says, "To discuss this more use the following tag ."

    Now that you have the tool, when you start blogging about a particular issue in the future, the likely-tags you might want to use are flashed in a pop-up window. Of course, you can turn this off.

    Notice that the tool is looking at what is new in the search result. The tool then monitors how this works with the other products: can the user-map be improved with more effective steps?

    Again, the tool looks at what the user is doing and hopes to accomplish, and figures out which tools to integrate with this platform.

    The bot learns that the tool finds and reports platforms that accomplish this task. In the case of the Google Search, the bot takes the following steps:


    Bot Actions to Match Tool with User Requirements


  • Knows the user wants X-type of platform [A site that has a single feed for the search results, not for each individual reported result]; then

  • Compares the user objective to the existing platforms; then

  • Looks for tools that are published and viable that accomplish this objective; then

  • Integrates this tool into the user’s suite of tools.
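    The four matching steps above reduce to a filter over candidate tools. A minimal sketch; the requirement strings and tool fields are assumptions made up for the example:

```python
def match_tools(requirements, candidates):
    """Return published, viable tools whose capabilities cover every requirement."""
    return [tool["name"] for tool in candidates
            if tool["published"] and tool["viable"]
            and requirements <= tool["capabilities"]]  # set containment


# The user wants one feed for the whole search result, not one per result.
requirements = {"single-feed-search-results"}
candidates = [
    {"name": "per-result-feeds", "published": True, "viable": True,
     "capabilities": {"per-result-feeds"}},
    {"name": "search-page-feed", "published": True, "viable": True,
     "capabilities": {"single-feed-search-results", "opml-export"}},
]
matches = match_tools(requirements, candidates)
```

    Only the tool that actually satisfies the user's objective survives the filter; integration into the user's suite would be a later step.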




    STEP TWELVE


    The bot is now narrowing in on the desired next steps. On its own, the bot looks for ways to accomplish multiple steps with one task.

    The bot is noticing the pattern of steps, and validating that a given tool meets the success criteria: that the data were entered once; that the user only handles the data once; and that the bot is doing the data transfer from one platform to another.

    This is another way of saying that the user creates data only once, and the multi-platforms, with the bot’s assistance, integrate with a tool to perform the same tasks and share data without user involvement.

    The bot does some other analysis and comes across the following tool:


    XML Multi-platform Search

    One-step search button


    The bot learns this tool is the single entry point for multi-platform search feed results. This single box is the single platform for all XML Search Buttons.

    Users would input a single search string and get a search of all the sites with XML buttons.

    It returns the results in an OPML-like format. All results across all search platforms are in a single feed.
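    A sketch of the fan-out idea: one query goes to every platform with an XML search button, and the results come back merged and de-duplicated into one feed. The callables here stand in for real platform APIs:

```python
def merged_search_feed(query, platforms):
    """Fan one search string out to every platform; merge into a single feed."""
    seen, feed = set(), []
    for search in platforms:              # each platform: query -> list of items
        for item in search(query):
            if item["link"] not in seen:  # de-duplicate across platforms
                seen.add(item["link"])
                feed.append(item)
    return feed


# Two stand-in platforms that overlap on one result.
platform_a = lambda q: [{"link": "http://a.example/1", "title": q}]
platform_b = lambda q: [{"link": "http://a.example/1", "title": q},
                        {"link": "http://b.example/2", "title": q}]
results = merged_search_feed("feedmesh", [platform_a, platform_b])
```

    The user types the query once; the merging and duplicate removal happen behind the single box.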

    The bot says, "To discuss this more use the following tag ."


    STEP THIRTEEN


    Let’s apply this tool to blogging. The tool would monitor how the users use an application, notice the steps, and notice common features that could be associated.

    In this case, the bot would find or develop protocols to integrate user tasks with platform capabilities. It both integrates the functions and aligns the platforms with the user’s desired logic maps.

    As an example, the tool should know the challenge is to block spam, while also meeting the competing goal of getting access to feeds and public comments. The bot would create the protocols to do what the user requires.

    Here’s a sample returned result of the tool that solves the user’s problem:


    XML Comment Feed

    Code for easy addition of blog comment feeds



    Requirements

    Tool hooks into the publishing ping and auto-generates three things:

  • A comment-feed for the aggregator to read;

  • A fast link between the blog, comments, and aggregator report; and

  • [?] A link-set-up of blog-comments in feed format.


    Search platform can specifically hunt down sites that have comment feeds, or create a comment feed from a single comment in a URL.

    System will link the comment-feed URI to a stand-alone blog that has no comments yet, so future comments flow into the feed.
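    A minimal sketch of the comment-feed generation, building an RSS 2.0 document with the standard library. The input shape (`author`, `text`, `id`) is an assumption, not a specification from the original:

```python
import xml.etree.ElementTree as ET

def comment_feed(post_url, comments):
    """Build a minimal RSS 2.0 feed of a post's comments for an aggregator."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Comments"
    ET.SubElement(channel, "link").text = post_url
    for comment in comments:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = comment["author"]
        ET.SubElement(item, "description").text = comment["text"]
        # Anchor each item to the comment's place on the post page.
        ET.SubElement(item, "link").text = post_url + "#" + comment["id"]
    return ET.tostring(rss, encoding="unicode")


feed = comment_feed("http://blog.example/post",
                    [{"author": "anon", "text": "Nice post", "id": "c1"}])
```

    Wiring this into the publishing ping would give the aggregator something to subscribe to the moment the post goes live.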

    To discuss this more use the following tag .

    What the bot is doing

    Notice there may be an integration issue, marked by a [?]-symbol with the last function. The bot is monitoring this for you before injecting the tool.

    This tool demonstrates that a user can add comments to any content and platform, but they do not need to be in a blog. The auto link function satisfies the requirement of seamless integration and ensures the platforms are integrated.

    This is known because the bot recognizes the competing criteria: To get feedback, all the while blocking spam. The bot has hunted down a tool that will satisfy this requirement.

    The tool is essentially a reward or catalyst for other platforms. Those sites that integrate with the tool get adopted; those sites that do not integrate will have to find another means to get their product showcased.

    This tool monitors underlying trends and creates tools and protocols to meet those objectives. It takes the user preferences and finds tools and protocols that will achieve the result.

    The tool also issues instructions to integrate and adopt tools to complete the task. It advises the user how to improve: either through training on new tools, or by advising on more efficient ways to achieve the same outcome.


    STEP FOURTEEN


    Here is where the bot summarizes and collects protocols and tools actually adopted. In this example, you can see that the returned tool has five features:


    Features of Bot-Identified Tool


  • One-step
  • Multi-platform
  • Seamless
  • No download required, fits in with user constraints
  • Stores results and organizes information for easy retrieval


    The bot returns this tool because it is a web-based system where users can click on a link and button, and with one step add the tag to the aggregator. This is a one-step search-in-platform, add-to-aggregator option that previously required many steps and copying data multiple times.

    This is a good thing. The goal is not simply to add a feed, but to integrate the search platform with the later aggregator monitoring. The search-commands are updated and refined based on how the user subsequently uses the results in the feed.

    The nice thing about the bot: this is done without adding a link to other protocols, and it is a web-based platform. Nice choice, since your bot knows you are behind a firewall, and the IT department will not let you download anything work-related. [Your bot is advising you to look for a new job, but you ignore it.]


    STEP FIFTEEN


    Here is where the logic maps are stored, archived, and maintained. A sample logic map includes major tasks; shows and documents that the tool actually expedites the task; and verifies the tool meets the success criteria: that it is one step, and that we are focusing on tasks, not links, and results, not processes.

    In this case we have four steps:


    User Tag-Blog-Aggregator Tasks


  • Blog
  • Tag
  • Track
  • Converse


    The bot returns this tool because it will help you. Blogging and tagging are combined with the single tool in the auto-tag-to-aggregator command.

    The bot found a tool that will allow the user in one step to ensure there is consistency between the blogged tag, the tag monitoring tools, and the feed stored in the aggregator. If the tags change or are updated, they are also changed across all the platforms.

    Again, the bot analyzed the above four steps and found the tool that would integrate these features.

    Then the bot shares the lessons with the other aggregator-integration tools and bots, sharing:


    Multi-Platform Data Sharing


  • user-templates;
  • site-maps; and
  • results from other discovered, but not yet used tools.



    STEP SIXTEEN


    This step goes into more detail on the FeedMesh Bot. This is a tool that will create a map of the external platform, identify limitations, look for other tools that will fill in those gaps, and develop systems to work within the constraints and promote tools that automate what the users are not doing: moving to auto-discovery of tools.


    XML FeedMesh Bot

    Actively seeks platforms with XML search capability


    Tool integrates with FeedMesh. Used to identify non-pinged and non-submitted sites, feeds, platforms, and tools to FeedMesh.

    Transitions from user-submitted system, to a system that actively hunts down feeds.

    Has ability to change any site to an XML feed. Publishers have the option to block the FeedMesh bot.
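    Actively hunting down feeds could lean on the standard autodiscovery convention: a page advertises its feed with a `<link rel="alternate">` tag in its HTML head. A sketch using the standard library parser; the sample page is made up:

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collects feed URLs advertised via <link rel="alternate"> autodiscovery."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if (tag == "link" and attr.get("rel") == "alternate"
                and attr.get("type", "").endswith(("rss+xml", "atom+xml"))):
            self.feeds.append(attr.get("href"))


page = ('<html><head><link rel="alternate" type="application/rss+xml" '
        'href="http://site.example/index.xml"></head></html>')
finder = FeedFinder()
finder.feed(page)
```

    A crawling bot would fetch each candidate page, run a parser like this, and submit whatever feeds it finds, rather than waiting for a user-initiated ping. Honoring a publisher's block [robots rules] would sit in front of the fetch.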

    The bot says, "To discuss this more use the following tag ."

    This returned tool provides clear instructions: how to create, integrate, and transform the feeds into something that is usable.

    It moves from something that is user-initiated to something bot-found; away from user involvement, to something that is seamlessly discovered by automated tools; from passively receiving feed submissions, to actively hunting down new sites and content.

    You like the bot. It is saving you time. You think that the bot may have a good idea about your IT department.


    STEP SEVENTEEN


    This is the analysis phase. The bot actually goes back and makes sure your work situation is improving.

    These are the types of questions that the user-bot would then ask at this stage:


    Bot Reviews Results


  • Have we achieved the goal despite the limitation?

  • What has to be done to do this?

  • Who or what is doing this?

  • How well can we improve efficiency of the work in this task?


    The challenge is how to ping search sites where something has been submitted, but the site is not connected to either PingOMatic or FeedMesh.

    The tool asks, "Is there code to add to the template? If not, then this is favorable."

    The bot reviews the content available. The service reports when it is ready; otherwise it states that the search has not been completed or there is no result.


    STEP EIGHTEEN


    This is the sharing feature.

    This is where the user identifies to other bots its findings: That it has found a new user-end-tool to do better searches for XML feeds.

    The bot sends a clear signal: This tool is simple, and makes searching easier.

    Also, the user-bot does the template sharing in this phase. It connects with other users and platforms to share features [not links]; templates [not content]; and wiring diagrams [not blogs].

    This is a more sophisticated user-end sharing. It moves the current system beyond link sharing toward sharing new ideas.

    It transforms blogging from a reporting system [that could be automated and expedited] into something that shares new ideas and novel content.

    Truly, blogs can do more than what users are stuck with. When will users be given the tools that expose them to the solutions needed to increase efficiency?


    CONCLUSION


    What we've done is run through some example products that a user-tool could find. This user tool examines how the user is using the platform and looks for ways to increase productivity.


    The key is to make these user-targeted suggestions related to the way the user works, as opposed to product-push. Those firms that can ensure their products integrate with this protocol will get showcased.

    Your life just got easier. If only you could convince the IT department . . .

    Links: Updated 3 Mar 2005

    Ref: Welcome to YSDN, Yahoo Search Developer's Network