Explainable means summarizable and visual. Often, it also means changing the presentation radically to fit the needs of the current viewer.
You may always need to present your thousand pages of budget figures, as a required report. But I guarantee that you will have delivered information more expertly – the information will be more valuable to the people who receive it – if you have also summarized those figures cogently.
My advice here is no different from a literature professor telling you that an essay requires an introduction and a conclusion; these elements aid the reader in grasping the content in the middle. Outline headers (“data groupings”) help, too, as do tables of contents and indexes.
The ways your output provides these pieces will vary depending on the output mechanism you use. I’ll include an example of an FRX that generates table of contents and index material in the source material for my other session. Adding similar output to the results of an FRXCLASS subclass (discussed below) is a trivial exercise, as you’ll soon appreciate.
Although you can summarize using the text-based elements I’ve just suggested, additional graphical presentations are of great value in summarizing. Some people will say this is a result of the picture’s proverbial ability to out-perform “a thousand words.” It may also be true that we use fewer, and less dense, elements in a graphical presentation than in a text-based presentation because of the intensive resources each picture consumes. Read “resources” here to cover both what the machine and the developer spend to create each picture, and what the user spends to assimilate its meaning. For whatever reason, the net effect is the same: graphics tend to display a summarized, high-level view of data.
However you generate your original output, you can save summary information along the way, for re-display in a more graphical format after the report runs. For example, you might finish the statistical report with group totals in a summary cursor that you can re-display in a chart on a form after the report prints.
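In sketch form, saving such a summary might look like this (the table, field, and report names here are hypothetical):

```foxpro
* Collect group totals into a summary cursor; the cursor outlives
* the REPORT FORM run and can feed a chart on a form afterwards.
SELECT cRegion, SUM(nAmount) AS nTotal ;
   FROM sales ;
   GROUP BY cRegion ;
   INTO CURSOR csrSummary
REPORT FORM salesdet NOCONSOLE TO PRINTER
* csrSummary is still open here, ready for graphical re-display
```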
Consider surrounding the reporting process with additional, visual ways for the user to manipulate various aspects of this process, to make the act of receiving output more active, and more malleable, from the user’s point of view. Suppose the form above appeared before the report was generated. The user might choose levels of report inclusion based on preliminary results. The summary query could be regenerated on the fly, with the chart changing appropriately, before the user actually generated the report.
The figure below illustrates this useful idea. It presents a (non-existent) reporting dialog that allows the user to manipulate calculations and output bonus results based on two variables. One variable (minimum total sales by region) results in a “TOP N” clause on a preliminary or nested query, and the other value (weighting for size of sales region) is part of a calculated query column that derives final bonus figures. The user watches the results of his/her actions dynamically reflected in a chart on the form, before deciding on final output.
You can see why the user would prefer to have a real-time “what if” scenario processor like this one, before bothering to print a full report. I need hardly mention that this particular report will result in bonus figures being simultaneously posted to the General Ledger, so it shouldn’t run before figures are final. If I ever implemented this fantasy form, I’d probably add a label that changed to reflect the total bonus amount as the boss calculated and recalculated the bonuses, which s/he’d certainly want before s/he approved the final figures!
Giving decision-makers information-handling tools is an important part of generating output.
The simultaneous posting of figures and creation of a bonus report in this sample form illustrates another of our principles: the same information has multiple uses, and our output of that information should be as flexible as possible, to incorporate new uses as needed. In this case, perhaps summary figures are sent to an Excel spreadsheet with macros while the bonus report is printed in two formats (a summary by region for the sales manager, plus individual letters, produced by automating Word, to the salespeople who get the bonuses). At the same time, a fairly standard detail report is created and posted as an e-mail notice to the accounting manager.
Output may be accessible and explainable, but the output reports and other documents that we imagine are never the end of the story, and the users for whom we designate transports are never the only people who need to see the output. The concept to which I’ve attached the clumsy label “accumulatable” goes back to the need for data warehousing. Such a warehouse provides yet another layer of repository in which our output can be stored, re-used, and sliced and diced to provide yet more information towards new goals.
Many of you are already data-driving report dialogs; all the output from which users can choose is available from one place, and we can easily add to these available output forms without more code. (My other session sample code is likely to have a simple example of such a dialog.)
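Such a dialog might be data-driven along these lines (a sketch; all table, field, and report names are hypothetical):

```foxpro
* Available reports live in a table, so adding an output form
* means adding a row, not writing more code.
CREATE TABLE reports FREE (cDesc C(40), cFrxFile C(60))
INSERT INTO reports VALUES ("Sales by Region", "salesreg.frx")
INSERT INTO reports VALUES ("Bonus Summary",   "bonussum.frx")
* In the dialog, a list box uses RowSourceType = 6 (Fields) with
* RowSource = "reports.cDesc"; the OK button simply runs:
*    REPORT FORM (ALLTRIM(reports.cFrxFile)) NOCONSOLE TO PRINTER
```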
On the other end of the reporting process, we need to store output that users are likely to request, or have already requested.
What does such storage buy us? The most common and long-standing reason to preserve reports, whether in paper or electronic form, is as an audit trail and a historical record that is guaranteed to represent particular moments in time, such as each year-end for accounts. We can append whole reports into memo fields, or into off-line views that represent snapshots of data, appropriately date-stamped.
If preferred, our tables can store file and pathnames of the actual report documents, rather than the content itself. It doesn’t matter whether the output we’re talking about is a table on which we can re-REPORT FORM later, or a print file we can re-print, or a foreign format document we’ll re-transport just as we did the first time, on demand.
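A sketch of the memo-field variation (the report, table, and file names are hypothetical):

```foxpro
* Archive a finished report into a date-stamped memo record,
* so it can be re-displayed or re-printed later on demand.
CREATE TABLE archive FREE (dRun D, cName C(20), mDoc M)
REPORT FORM sales NOCONSOLE TO FILE sales.txt ASCII
* archive is the current work area; the INSERT positions the
* record pointer on the new row before the memo is filled
INSERT INTO archive (dRun, cName) VALUES (DATE(), "sales")
APPEND MEMO mDoc FROM sales.txt
```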
How else can we leverage storage of results? We’ve already saved some processing time for re-creation of any report. Let’s add less network traffic. Suppose that sales managers, the VP of Sales, and individual salespeople all need to see certain month-end figures, although each person needs a different level of detail (“bursting”) in his or her report, and some people are only entitled to a portion of the data (“slices”). If we perform the month-end query once against the server and store the results in a local table, we can give each person their portion of the results on demand, archiving the single table later, as suggested above. The ideal querying, from a network point of view, is carefully managed in scope to provide as many different kinds of information as possible without requiring more information to be sent down the wire, overall.
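In sketch form (names hypothetical), the single expensive query and the local slices might read:

```foxpro
* Run the month-end query once against the server data,
* storing results in a local table for later slicing.
SELECT cRegion, cSalesRep, SUM(nAmount) AS nTotal ;
   FROM monthsales ;
   GROUP BY cRegion, cSalesRep ;
   INTO TABLE monthend
* Each later request filters locally -- no further trips to the server:
SELECT * FROM monthend WHERE cRegion = "EAST" INTO CURSOR csrSlice
```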
Once we start storing these results – whether formatted output or interim tables or both – we’ve started an impromptu data warehouse or mart of our own. Most people will tell you that it’s inadvisable to start such a repository without adequate preparation or planning. I won’t argue otherwise, if we’re talking about an expensive and time-consuming process that you initiate independently of work you have to do anyway. On the other hand, you have other reasons to store data already. If you make the storage of this data known and document its contents in your organization, people will find new ways to turn that data into useful information eventually, and it hasn’t cost you much at all.
It’s interesting to note that some third-party reporting tools, such as Crystal Reports (published by Seagate Software), store result sets with report definitions by default. Once you’ve output a report, you have to indicate to the tool that you want to re-assemble the data for your results if you don’t want to use the earlier “snapshot”.
The architecture of Crystal Info, a sort of parent tool to the reporting tool, gives us a good model for controlling and managing the reporting process across the enterprise. Along with strategies to reduce redundant queries, Crystal automates the process of report scheduling and allows you to off-load the actual report generation and querying to “information servers”. These are techniques we can easily emulate in VFP systems if we choose – or we can use Crystal Info and Crystal Reports directly to achieve our results, instead. Check out http://www.img.seagatesoftware.com for more information.
Having established some criteria for the kind of output you should be looking to produce, it’s time to talk about what sorts of output best accomplish these goals. Everything below should stand as a technique that helps you meet at least one, and possibly several, of these goals.
To fulfill these criteria, it is evident that our output has to be in a format that is as portable and neutral as possible. This will allow many people to access and read it, allow many new methods of displaying and arranging it to be added over time, and give the stored output the greatest variety of uses later.
When I first started thinking about this problem, the only “portable and storable” format we had was REPORT TO FILE. You could get either an ASCII text image of your output or a file containing printer information, suitable for sending to a printer that understood certain codes later on. Originally, in Xbase, these two forms were the same thing. Then we learned to embed printer codes into our output programmatically, and later GENPD provided a mechanism to automate the embedding of these codes. Later still, we could tell Windows to embed the appropriate codes for us, by designating the appropriate printer to the OS.
It was only natural for me to arrive at the idea that PostScript would serve as a “portable and storable” rich format. It has the capability to describe graphical output exactly, and its format is well-documented and standard for many output devices, not tied to a single manufacturer or a single platform. We could output EPS (Encapsulated PostScript) instructions to a file, and we could have fair confidence in how these instructions would be “obeyed”, including complex graphics and other formatting.
In fact, had we been discussing this topic a few years ago, I’d be recommending that you report to a PostScript file and use an application called Ghostview to view and print the results. You could also try PDF (Portable Document Format), created by Adobe Systems, Inc., to add hypertexting capabilities to PostScript, and a client viewer application, Adobe Acrobat, to read and manipulate documents in this format.
PostScript and PDF fulfill the requirement of portability. Ghostview and Acrobat fulfill the requirement of platform neutrality. But these options fall short of complete context neutrality, in the sense that there aren’t many other applications besides Ghostview and Acrobat that can handle these formats.
HTML provides an almost ideally neutral format. You still need some application that „reads“ and interprets HTML, but as you know there are hundreds of such applications. There is nothing more accessible.
The fact that HTML has become a clear winner in the format department is evident in that plug-ins are now available to convert both PostScript and PDF to HTML for viewing within browsers. (The plug-in for PostScript is actually Ghostview itself, which comes with suitable instructions for installing into various Windows browsers. For more information about Ghostview and Aladdin Ghostscript, the interpreter for PostScript that lies underneath Ghostview, visit ftp.cs.wisc.edu.) You’ll find the Windows plug-in that converts PDF to HTML at http://www.adobe.com/prodindex/acrobat/accessadobecom.html.
Hypertexting helps fulfill our “explainable” criterion: one document can reference or nest another, allowing multiple ways of thinking about the information that each contains, and almost unlimited types of summarizing document collections. We can also see that anybody can edit an HTML document in one of dozens of tools, to annotate, illustrate, or otherwise enhance any particular point. When it comes to the criterion of “accumulatable” documents, of course, people are already caching HTML documents or explicitly saving them to disk for later use. The format is so well understood that anybody can write a FoxPro, or C, or PERL program to parse and evaluate a document’s contents after receiving it. Placing collections of HTML text documents in memo fields wins, hands-down, over placing other formats of documents in general fields, for both control and use of disk space.
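As a tiny illustration of such parsing, VFP’s string functions can pull the title out of a stored HTML document (a sketch that assumes the tags are present; the table and field names are hypothetical):

```foxpro
* Extract the text between <TITLE> and </TITLE> in a memo field.
cDoc   = archive.mDoc
nStart = AT("<TITLE>", UPPER(cDoc)) + LEN("<TITLE>")
nStop  = AT("</TITLE>", UPPER(cDoc))
cTitle = SUBSTR(cDoc, nStart, nStop - nStart)
```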
Although I’ve noted the availability of PostScript browser plug-ins above, there is no need for us to resort to such subterfuge to create browser-viewable output. HTML is easy for us, as VFP developers, to produce ourselves. As with faxing, we have several different avenues to arrive at HTML-format documents in VFP. We can separate these methods into two broad classifications, which I’ll call “high level” and “low level”.
“High level” creation of HTML means asking another application to do it for us, and remaining ignorant of the details. For example, I can automate Word and insert various elements, including tables, formatting, bookmarks, and so on, into a Word document. When I’ve passed all the information over to Word in this manner, I can ask Word to save the document in HTML format, and it will capably translate the whole shebang at once.
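A minimal sketch of this high-level approach follows; the file format constant is an assumption here (later versions of Word use 8 for HTML -- check the WdSaveFormat constants in your Word version’s VBA documentation):

```foxpro
* Build a trivial document in Word, then let Word translate
* the whole thing to HTML on save.
#DEFINE wdFormatHTML 8   && assumed value; verify for your Word version
oWord = CREATEOBJECT("Word.Application")
oWord.Documents.Add()
oWord.Selection.TypeText("Monthly Bonus Summary")
oWord.ActiveDocument.SaveAs("bonus.htm", wdFormatHTML)
oWord.Quit()
```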
It’s certainly an advantage to be able to remain ignorant of HTML formatting, I suppose, and we can create fairly sophisticated layouts in this manner. However, you may find instructing Word to perform subtle formatting tasks through VBA to be as daunting a task as learning the equivalent HTML tagging syntax, and it certainly is resource-intensive.
A simpler, yet still high-level, choice would be to use an HTML printer driver directly on a VFP REPORT FORM or other VFP output. (You can find versions of the unsupported Microsoft utility pack POWERTOYS that include an HTML printer driver. When you look for this file on the web, be sure to ascertain that you’ve got one of the versions that includes the HTML printer driver; not all of them do, presumably because such drivers would be specific to Win95 or NT.)
There are two main disadvantages to using an HTML driver:
I prefer the alternative, “low-level” generation of HTML results from VFP, to any of the above methods. By “low-level” I refer to any method of sending the actual HTML instructions directly from VFP, programmatically, to a file, rather than expecting another program to issue these instructions.
This may seem to contradict my earlier advice to take help when help is available, but in truth HTML formatting is so simple to learn, and VFP is so adept at manipulating text, that you can get very sophisticated results very easily with low-level techniques.
To issue the HTML formatting directly within VFP, we have a number of alternatives, such as the low level file functions (FOPEN() etc), SET TEXTMERGE, and even the lowly and ancient SET ALTERNATE. You can also embed HTML tags in expressions in a report form and use the REPORT FORM ASCII command to send this text out to a file. You can store text in memofields, reformat the text using VFP’s myriad string- and memo-handling commands, and COPY the memofile TO a file.
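Here is a minimal TEXTMERGE sketch of what I mean (the table and field names are hypothetical):

```foxpro
* Send HTML tags and merged field values straight to a file.
SET TEXTMERGE TO report.htm NOSHOW
SET TEXTMERGE ON
\<HTML><BODY><TABLE BORDER=1>
SELECT sales
SCAN
\<TR><TD><<sales.cRegion>></TD><TD><<TRANS(sales.nTotal, "999999.99")>></TD></TR>
ENDSCAN
\</TABLE></BODY></HTML>
SET TEXTMERGE OFF
SET TEXTMERGE TO
```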
You remember that I said that the Neanderthal character-based FRX handled some tasks very well? These tasks involve the portion of output production that require the data to be scanned, related, and grouped – and the FRX still does those tasks well, today, even though it may be inadequate to display those results the way we’d like them to be displayed.
I’ve created an FRXCLASS object to leverage the FRX and REPORT FORM ability to manipulate data, while creating the actual output outside the FRX. FRXCLASS uses an empty report form that calls out to its associated custom object from each report band event and group expression evaluation. The associated object can evaluate, massage, and output the current data in the tables that the FRX is scanning through, in any format you like.
FRXCLASS and two of its subclasses -- FRX2DOC, for Word, and FRX2HTM, for HTML -- are in FRXCLASS.PRG in the source code samples for this session, along with FRXCLASS.API, a text file describing their use. (Many more subclasses are possible.) Strictly speaking, these two subclasses are “abstract”; they don’t create output themselves, they just create the circumstances in which reports can be created for their chosen format types. You subclass them further for actual reports, or families of reports. The source code contains JUGGLER1.PRG and JUGGLER2.PRG, with subclasses of FRX2DOC and FRX2HTM respectively, that create identical reports in the different formats.
All subclasses of FRXCLASS use the identical empty report form, and simply choose different output mechanisms to emit results from each event. For example, here is a detail band event method in JUGGLER1:
PROC DetailEntry
   THIS.UpdateMessage()
   THIS.oWordRef.Insert(Detail.Trick)
   * note the use of all string values here
   THIS.oWordRef.NextCell()
   THIS.oWordRef.Insert(DTOC(Detail.Meeting))
   THIS.oWordRef.NextCell()
   THIS.oWordRef.RightPara()
   THIS.oWordRef.Insert(TRANS(Detail.Points, "@( 999.99"))
   THIS.oWordRef.NextCell()
ENDPROC
The detail code for the same output, in JUGGLER2, reads as follows, using TEXTMERGE to create the HTML output:
PROC DetailEntry
   THIS.UpdateMessage()
\<<THIS.SendText(Detail.Trick)>>
\\<<Detail.Meeting>><<SPACE(IIF(Detail.Points<0,5,6))>>
\\<<Detail.Points>>
   * note that the spacing remains constant, without extra work,
   * as long as we don't trim the entries
ENDPROC