
What's New in Visual DataFlex 6


Overview
Integrated Development Environment (IDE)
Visual DataFlex Debugger
Database Builder
Compiler
Language Changes
Runtime and Package Changes
Documentation
Crystal Reports 7 for DataFlex
Connectivity Kits
WinPrint
Database Explorer

Overview

This section contains information that will be of interest to developers upgrading from a previous version of DataFlex to VDF6. Not all changes have been listed here. A change is described if:

A.    A new feature or enhancement will be of particular interest to an existing VDF developer. - Or -

B.    A change may affect the way your existing applications work.

IMPORTANT: All paragraphs that are marked as “IMPORTANT:” contain information about a change in VDF that might affect an existing program. You should review these sections very carefully.

Contents


Integrated Development Environment (IDE)

Source Code Editor
Component Editor and Explorer
Program Outline
New Classes
Enhancements to the IDE's Subclassing System
Improved Templates
Sub-component Support
Object Preference Files
Crystal Reports Wizard
Jump-to-Error Editing
Integrated Debugger Support
Miscellaneous Changes

Source Code Editor

A new, much more powerful source editor has been integrated into VDF6. The same editor is used by the IDE, by Database Builder and by the Debugger. It has the following features:

  • User configurable: Configuration settings (colors, keywords, etc.) are set in the IDE and are applied to the IDE, Database Builder and the Debugger.
  • Very fast with smooth scrolling.
  • Massive file-size support: Tested with files over one megabyte in size.
  • Full color-coding: The editor supports significantly enhanced color-coding of the DataFlex language, permitting individual customization for keywords, numbers, strings, comments, etc.
  • Unlimited Undo and Redo: Individual text changes can now be undone or re-done with no limit.
  • Automatic indenting and out-denting: DataFlex block-starting commands, such as “Begin,” will automatically indent the following line. Block-ending commands, such as “End,” will automatically out-dent, retaining correct indentation for the language.
  • Insert/overtype: The editor now supports both Insert and Overtype modes.
  • Search & Search/Replace: Powerful search and replace functionality is provided including features such as bookmarking all occurrences of a find and replacements respecting the case of the replaced word.
  • Bookmarking: Up to ten bookmarks are supported. Bookmarks permit a position in the source to be marked and returned to later with a simple key combination.
  • Split Screen: The editor supports both a horizontal and vertical split screen, with automatic synchronization of the code. This feature permits you to view one part of the code while editing another part.
  • Drag n Drop: Source can be dragged and dropped within the editor or from other Windows applications such as Word.
  • Stream/Block marking: Text marking can be either stream or column.
  • Toggle the display of “whitespace” (show spaces, etc.).
  • Uppercase/lowercase the marked block.
  • Set the number of spaces for indent.
  • User-defined colors.
  • Context-Help relative to the caret.
  • Extensive mouse and keyboard functions.
  • Context Menu provides shortcuts to frequently used operations.

Back to IDE

Component Editor and Explorer

The new Component Explorer combines functionality of the object navigator and source code editor into a far more powerful single tool. The explorer consists of a split-screen explorer style panel with a tree view listing all objects on the left and a custom source editor on the right. The editor shows the custom source for the currently selected object.

The explorer is synchronized with the IDE’s component display. Changing the selected item in the explorer’s tree-view will change the selected item in the component display and vice versa. The explorer and its integrated editor are very fast, making it easy to navigate through objects and their source.

Back to IDE

Program Outline

The Program Outline has been enhanced. It now uses a split-screen explorer style interface that shows program components in a tree-view on the left and source code in an editor on the right. Depending on the program component selected, the editor will display top/bottom source code areas or the full source for a component.

Back to IDE

New Classes

The following additional classes are now supported directly by the IDE.

  • dbTabDialogView
  • dbRadioGroup
  • dbTrackBar
  • Grid
  • TrackBar
  • ProgressBar

Back to IDE

Enhancements to the IDE's Subclassing System

IMPORTANT: The method used to create sub-classes in VDF5 is not compatible with VDF6. If you already have IDE sub-classes you will have to convert them to the new method. This conversion process is quite easy and you will find the advantages of the new system to be well worth the change.

The method of defining classes and sub-classes within the IDE has been significantly enhanced. It is now much easier to add sub-classes to the IDE and it is now possible to define classes that previous versions of the IDE would not support. The creation of sub-classes within the IDE is an advanced developer topic and is described in Using your own Subclasses.

Some of the capabilities of this new sub-classing system are:

  • Support for global and local workspace classes. Each workspace may have a unique set of classes.
  • Automatic loading and unloading of classes when changing workspaces.
  • Custom property lists for each class in the Object Properties tool.
  • Full access to DAC and custom classes.
  • "No Execute" property allows the IDE to display properties it cannot access.
  • "Inherit" property allows classes to inherit from the superclass.
  • Classes are contained in separate (.dfc) files in a new workspace directory named IdeSrc.
  • New classes may be added to the Controls Palette and existing classes may be “hidden” allowing developers to create their own layer of sub-classes on top of the standard VDF classes.
  • A class maintenance tool is part of the IDE. It allows you to register classes and determine where they will appear within the Controls Palette.

Note that the IDE now uses stricter syntax checking when parsing source. The previous IDE was more relaxed when it parsed data that was not part of a sub-class definition. It would execute the code and hope that it would work. The new IDE generates an error if parsed source is not part of the sub-class definition.

Back to IDE

Improved Templates

The use of component templates has been simplified and expanded. Templates may now be saved and loaded from local or global workspaces. In addition, templates no longer require registration within AbData.

Back to IDE

Sub-component Support

Sub-components are now supported. Sub-components (a collection of objects to be re-used within a Component) may be saved and loaded in either a Local or Global workspace area. These files are stored in a new workspace directory named IdeSrc and are saved with the .dfs file extension.

Back to IDE

Object Preference Files

When an object is created within a component, there is a new method for initializing that object. This method allows for the setting of properties, creation of custom code, and the creation of custom child objects. For example, a Button can be created with the “Procedure OnClick / End_Procedure” already created.

This is accomplished by creating preference files. These files are stored in a new workspace directory named IdeSrc and are saved with the .dfo file extension.
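
For example, a preference file for the Button class could cause every new Button to be created with an empty OnClick procedure. The snippet below is only a sketch of the kind of object source such a preference file might generate; the object name and label are hypothetical.

Object oSaveButton is a Button
    Set Label to "Save"
    // Created automatically so custom click code can be added immediately
    Procedure OnClick
    End_Procedure
End_Object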

Back to IDE

Crystal Reports Wizard

The Crystal Report Wizard allows you to create VDF component interfaces for Crystal Reports. This wizard will actually load the Crystal Report and guide you through set up options based on the characteristics of the report. The wizard allows you to create front-end set-up options such as sort order, criteria selection, and output device.

Back to IDE

Jump-to-Error Editing

The IDE now supports a compiler-error reconciliation panel. If errors are encountered during a compilation, the IDE will load a split screen panel. Errors are displayed in a list in the bottom frame and the source code for the highlighted error is displayed in the top frame. Errors are easily corrected by selecting the error to view and changing the source code. When the panel is closed, the files are saved and the IDE is updated as needed.

Back to IDE

Integrated Debugger Support

Support for the new debugger is built into the IDE. A new menu item, Debug, allows you to compile programs and test components with debug symbols, run a program under debugger control, or simply load the debugger.

Back to IDE

Miscellaneous Changes

  • A “mouse drag lock” has been created which allows you to disable object movement with a mouse.
  • The Nudge Objects tool panel can be used to resize and reposition objects.
  • Pasted objects are now located in a more intelligent manner. When an object or group of objects is pasted into a component, the IDE adjusts the location so that, at least, the first pasted object will always be visible.
  • The size and location of tool panels (e.g., the Controls Palette) are maintained across IDE sessions.
  • Checkbox-columns are now supported for the dbGrid and dbList classes.
  • The Controls Palette now supports user-defined pages.
  • Tools now auto-locate themselves when the screen resolution changes.
  • A warning is given before removing a DDO from a Component’s DDO-structure if custom-code has been added.
  • Objects may be moved and sized from the keyboard.

Back to IDE

Contents


Visual DataFlex Debugger

An integrated source level debugger has been added to VDF. It supports the following features:

  • Full IDE Integration: The IDE has a Debug menu that provides full access to the debugger. It allows you to prepare (compile with debug symbols) and run any program or test component under full debugger control.
  • Full Source Level Debugging: Debug by stepping through your source code. The editor used by Visual DataFlex Debugger is the same powerful editor used by the IDE. It has the same user interface and all of the same command and syntax color highlighting as the IDE.
  • Flexible breakpoints: Visual DataFlex Debugger supports unlimited breakpoints. Pointing to a line of text and clicking the mouse button creates breakpoints. Breakpoints can be easily added, removed, enabled, disabled, and viewed.
  • Flexible program stepping options: Once a breakpoint has been encountered and Visual DataFlex Debugger is controlling execution, various stepping options are available. In addition to single step program execution, you can choose to step into, step over and step out of procedures and functions. You can also choose to run to cursor or just run (until the next breakpoint is encountered).
  • View variable values: Both local and global variables can be displayed in variable windows or by passing your mouse cursor over a variable in the source window (the value will appear in a tool tip). Visual DataFlex Debugger knows when a local variable is “in-scope” and will only display local data when it is appropriate.
  • Expression Evaluator: The expression evaluator makes it easy to view any piece of DataFlex data. This provides a powerful method for viewing object property values.
  • Ease of use: The Debugger’s integration into the IDE and the compiler makes it easy to use. During development, no special steps are required to prepare a program for debugging. You just compile your program and run it. If you discover you need to debug your program, just load the debugger and attach to it.
  • Many other Features: Some of the other debug features include a Watch window (watch variables and expressions), a call stack, a File dialog (view file and field data), an object tree display, and a message history trace window.

DFSpy has been removed from VDF – it is no longer needed.

Contents


Database Builder

  • The file definition panels now support resizing and maximize. All of the appropriate controls reposition and resize automatically, allowing more information on the file and data dictionary definitions to be viewed.
  • The Methods Editor Tab Page has been re-designed into a single tree-view. The tree-view performs all of the operations of the old design, without any of the known bugs or nuances.
  • The new editor (see New Source Editor for details) is used in the Methods Tab Page and as the Data Dictionary pop-up code editor.
  • Data Dictionary source files now support code placement prior to the //DDB-FileStart marker. This code is ignored by the DD parser (see the sketch after this list).
  • Developers may now specify exactly where the Data Dictionary generated settings in the Define_Fields and Field_Defaults should be placed. A special marker is used to specify the point in each method. This allows developers to place custom code prior to DBB-generated code.
  • Edits to the DD class that are made in the pop-up editor are immediately applied. It is no longer necessary for the developer to press the save button.
  • Added file DSN support.
  • Added logging to conversion for ODBC or DB2.
  • Conversion can be run unattended.
  • Open file panel is resizable.
  • Added DDF generator for DataFlex for Btrieve files. This will allow the user to generate DDF information from DataFlex for Btrieve files so they can be used from the DataFlex Connectivity Kit for Btrieve (aka the driver). This option is only available if the Btrieve driver is loaded.
  • When moving a file, you can select to update relationships to the file automatically.
  • When loading a DEF file, you can specify a target type. This makes it possible, for example, to load a DataFlex .def file for an ODBC format.
  • You can set up the confirmation behavior used when deleting, erasing, or removing files: show a confirmation message for every file, once, or never. This is set up in the configuration panel.
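
As referenced above, custom code may now be placed before the //DDB-FileStart marker in a Data Dictionary source file. The fragment below is only a sketch; the package name is hypothetical and the generated class is elided.

// Code above the marker is ignored by the DD parser
Use MyHelpers.pkg

//DDB-FileStart
// ...generated Data Dictionary class definition continues here...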

Contents


Compiler

  • Error line numbers are now always correct: In previous versions of DataFlex, some error messages such as “unresolved message name” would report the error line number as the last line of source. This has been corrected.
  • The compiler now allows source lines of up to 4,096 characters (up from 256).
  • A new compiler option and a new compiler switch (–z) supports the ability to compile with debug symbols. This is required to support the Debugger. For more information about compiling for debugging see Compiling with Debug Information.
  • Much Better Error checking: Changes in the Compiler and in FMAC allow the compiler to perform much better syntax and structure error checking. This is an important change! You should carefully review Improved Syntax and Construct Checking for detailed information.

IMPORTANT: Changes in the compiler and in FMAC may result in compiler errors in programs that previously compiled error free. In most cases, the compiler is detecting real errors that previously escaped detection. Correcting these errors should be easy and is, obviously, important. If for some reason you cannot correct these errors, a compiler command, Compiler_Warnings OFF, can be inserted at the top of a program, allowing you to compile your programs using the more relaxed (and possibly wrong) VDF5 syntax.

Contents


Language Changes

Simplified, Consistent Command Syntax
Obsolete Commands and Techniques
Improved Syntax and Construct Checking
Relaxing the Compiler Checking
Compiler Changes in Error Checking and Syntax
Case Statement

One of the goals of VDF6 was to make the programming language easier to use. To achieve this goal, four major changes were made:

  1. A suggested syntax has been created for all commands: In cases where multiple methods exist to accomplish the same goal, we now present a single “suggested” approach. For example, when commands and functions exist that accomplish the same thing (e.g., the Left command and function) we advise that you always use the function.
  2. Stricter syntax checking is now enforced: Previous versions of DataFlex supported very relaxed compiler syntax checking. This made the language confusing and made it much harder to distinguish bad code from unusual code. We now enforce stricter and consequently much more sophisticated syntax error checking. For example, the “to” keyword is now required in any Set or Get statement. By requiring this keyword we can now check that the “to” keyword is included, that it is included in the proper position and that it is only included once.
  3. A simplified syntax has been created: Some command syntax changes will make programs easier to type, easier to read and less prone to error. For example, the Local keyword is no longer required within a method. If a variable is declared within a method, it is local. Another example of simplification is object referencing. You may now reference an object by its simple name (e.g., Get piProp of oMyform to iVar) instead of its expression access method (e.g., Get piProp of (oMyform(self)) to iVar).
  4. We’ve identified many commands and techniques as being obsolete: We have identified over 100 commands and numerous programming techniques as being obsolete and we advise that you no longer use them. Note that these commands do not have to be removed from existing programs – existing programs that use these commands will continue to run properly. Documentation for obsolete commands has been moved into a section of the Language Guide named Obsolete Commands, Functions and Other Symbols. Each obsolete command will contain a link to the command that should be used in its place.

You will find the advantages of these changes to be significant. It is possible that some existing VDF programs will not compile properly in VDF6. In many cases, you will discover you’ve made a programming error (and you will welcome the notification). In some cases, older style coding, while correct, will now be considered improper. While we provide a method to disable the new changes, allowing you to compile “old style,” we encourage you in the strongest possible terms to correct your programs and bring them up to date. We’ve performed this update on a great deal of internal code and have found the process to be relatively fast and easy.

To find out more about these changes you should refer to the following documents:

  • The Language Guide presents an overview of the DataFlex Language and should be reviewed by all developers. This guide will be particularly useful to new developers.
  • The Language Reference contains a section titled Obsolete Commands, Functions and Other Symbols. Refer to this to see what commands are now obsolete and what commands should be used in their place.

Back to Language Changes

Simplified, Consistent Command Syntax

Several changes were made in the DataFlex command language. These changes attempted to improve on syntax that was either clumsy, hard to remember, or error prone. The complete list of changes is contained within this document. The most significant changes are:

  • Variables are intelligently scoped to determine if they are local or global. The Local keyword is no longer required inside of methods.
  • The keyword Self should be used in place of Current_object.
  • The object referencing parameter in Get/Set/Send commands allows you to use a simple naming syntax of ObjectName instead of (ObjectName(Self)).
  • A simplified and consistent message sending syntax is now used (a usage sketch follows this list):

Get msg {of obj} {param1...paramx} to Val
Set msg {of obj} {param1...paramx} to Val
Send msg {of obj} {param1...paramx}

  • A Case command is now supported
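
A brief usage sketch of the forms above, using hypothetical property, message, and object names:

Get piCount of oCustomerList to iCount       // read a property
Set piCount of oCustomerList to (iCount + 1) // write a property
Send DoRefresh of oCustomerList              // send a message with no parameters
Send DoAddCustomer of oCustomerList sName    // send a message with one parameter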

Back to Language Changes

Obsolete Commands and Techniques

A number of commands and techniques are now considered to be obsolete. You will not need to go and change your existing programs – the old commands will still work.

Here are examples of some of the commands and techniques that are now considered obsolete, accompanied by their suggested replacement commands and techniques.

Command vs. Functions

In DataFlex, there are numerous commands and functions that do the same thing (e.g., Left, Right, Append, Pos). We recommend that you always use functions instead of commands.

Not: Left sText 20 to sValue
Use: Move (left(sText,20)) to sValue

Not: Append sVar1 sVar2
Use: Move (Append(sVar1,svar2)) to sVar1

Expressions

The use of logical operators is an area where there are many ways to accomplish the same things. Below is some useful expression information and programming guidelines.

  • In all cases expressions will evaluate to zero (false) or non-zero (true).
  • If (expression); Expressions belong in parentheses (see next item for exception)

If (x=7) ….
If (x=7 and y>z) ….
If not (x=7 and y>z) ….
If ( MyFunct(self,var) or GlblFunct(var) ) ….

  • If IntegerVariable; Parentheses are not required for single operand expressions. The single operand is usually an integer. If it is zero, it is false. If it is non-zero, it is true.

If var….
If not var

  • Use of indicators; Only use indicators when you must: Found, SeqEof, Err. Indicators can be used in expressions but they must be enclosed within their own set of parentheses.

If (found) ….
If ( (found) and (err) ) ….
If ( (found) and changed_state(self) and sVal<>"" )

  • Be aware of expression short circuit capability (it’s powerful).
  • Avoid global variables; Whenever possible use local variables within expressions, not globals, and particularly not indicators. Local integers can be used in place of indicators.
    If you need to create a “variable” that is global to an entire object, define and use a property – that’s what they are there for. Properties can be defined within objects and within classes (a small property sketch follows this list).
  • The If statement can be followed by a single command or, if multiple commands are needed, a Begin/End construct. The following does not require a Begin/End block (although you may create one if desired):

If (SomeValue > SomeOtherValue) ;
    Move SomeValue to SomeOtherValue

  • Note that the use of the semi-colon as a command line separator is a valid and useful technique.
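
As mentioned above, a property can serve as an object-scoped “variable.” A minimal sketch, with hypothetical names:

Object oOrderView is a View
    Property Integer piLastOrder 0   // object-wide value instead of a global variable

    Procedure RememberOrder Integer iOrder
        Set piLastOrder to iOrder
    End_Procedure
End_Object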

Obsolete methods of use (Do not use)

  • Do not use EQ, GT, LT, etc.

Not: If x eq 7
Use: If (x=7)

Not: If (x eq 7)
Use: If (x=7)

  • Do not use multiple IFs if not needed:

Not: If x eq 7 If y eq 8…
Use: If (x=7 and y=8)…

Not: If (x=7) Begin
            If (y=8) Begin
Use: If (x=7 and y=8) Begin

  • Do not use indicators. Use integers. If an integer evaluates to zero it is false, if non-zero it is true.

Not: [IndOk] Move x to y
Use: If IntOk Move x to y

Not: [IndOk][Ind2Ok] Move x to y
Use: If (IntOk and Int2Ok) Move x to y

Not: [IndOk] Move x to y
[IndOk] Move y to z
Use: If IntOk Begin
             Move x to y
             Move y to z
         End

Back to Language Changes

Improved Syntax and Construct Checking

Previous versions of the compiler did not do a very thorough job of checking for obvious errors. The compiler took the attitude of “when in doubt, don’t report an error.” This has changed. If syntax or command placement is doubtful, an error will be reported.

This should greatly reduce development time. Now, instead of tracking down problems within a running program, you will have a much greater chance that the error will be reported at compile time. The following sample should prove this point:

Class MyArray is An Array
    Property Integer IsBusy public False
    Object OldVals is an Array
    End_Object
End_Class

Object MyObject is an Array
    Local Integer OldValue
    Procedure Construct_Object
        Forward Send Construct_Object
        Property Integer IntProp public 0
    End_Procedure

    Procedure Add_Items integer i1 string s1
        Integer iCnt
        If i1 eq 0 Begin
            Send delete_data
            return
        end
        Get (Item_Count(self)) to iCnt
        Set value iCnt s1
    End_Procedure
End_Object

Send Add_items 1 "John" (MyObject(self))

How many errors can you see? In previous versions of Visual DataFlex, this sample will compile. It might even run. Just about everything in it is wrong. Here are the problems:

  1. Within a class you should only define properties within a method – usually construct object.
  2. Within a class you should only define objects within a method.
  3. You cannot place construct_object within an object. It never gets called.
  4. You should not define properties within methods within an object – they should be defined directly within the object.
  5. You cannot create local variables outside of methods.
  6. A global variable, iCnt, was created within Add_items when a local was wanted (we forgot the Local keyword).
  7. Return was used in the procedure instead of procedure_return.
  8. Get (Item_Count(self)) is wrong. Get followed by an expression is almost always wrong.
  9. The “set value iCnt s1” command is ambiguous and will not work. It will attempt to set the value of the current_item with the value in iCnt.
  10. The send Add_items syntax is wrong. The parameters belong after the object name.

The new compiler/fmac will catch all of these problems. In addition, changes have been made to allow for simpler command syntax. The proper VDF6 code for the above example should be:

Class MyArray is An Array
    Procedure Construct_Object
        Forward Send Construct_Object
        Property Integer pbIsBusy False
        Object oOldVals is an Array
        End_Object
    End_Procedure
End_Class

Object oMyObject is an Array
    Property Integer piOldValue 0
    Property Integer piIntProp 0
    Procedure Add_Items integer i1 string s1
        Integer iCnt
        If (i1=0) Begin
            Send delete_data
            Procedure_return
        End
        Get Item_Count to iCnt
        Set value iCnt to s1
    End_Procedure
End_Object

Send Add_items of oMyObject 1 "John"

Back to Language Changes

Relaxing the Compiler Checking

In some cases, this new stricter syntax checking will catch errors that the previous DataFlex compiler did not. In other cases, the stricter checking will report errors that, while not true errors, represent unusual, ambiguous or bad coding. In both cases, you will want to correct this code as quickly as possible.

The reporting of some errors can be relaxed. Items marked with <Warning!> in the lists below can be relaxed by using the command Compiler_Warnings. This command turns relaxed compiler checking on and off; it is passed the parameter ON or OFF to indicate whether the compiler should relax its checking. Therefore, placing the command Compiler_Warnings OFF at the top of a program will make it work the way it always did.

For example, it is possible, although highly unlikely, that the following syntax might be correct:

Procedure Add_Items
    Integer iCnt
    Get (MyProp(Self)) to iCnt
    :
End_Procedure

If this is the case, you could tell the compiler to allow this syntax by adding the command “Compiler_Warnings OFF” to the top of the program. Even better would be to surround the code as follows:

Procedure Add_Items
    Integer iCnt
    Compiler_Warnings Off
    Get (MyProp(Self)) to iCnt
    Compiler_Warnings On
    :
End_Procedure

A better and recommended alternative would be to create syntax that is more clearly intentional. The following code does not require relaxed error checking and it more clearly reflects the intent of the programmer (even without comments).

Procedure Add_Items
    Integer iCnt iMessage
    Get MyProp to iMessage
    Get iMessage to iCnt
    :
End_Procedure

Even with Compiler_Warnings set to OFF, some new errors may be reported. These errors are so serious that they cannot be relaxed. They should be corrected immediately.

Back to Language Changes

Compiler Changes in Error Checking and Syntax

The following lists all compiler and syntax changes in VDF6.

Classes: Class / End_Class Commands

  • The newly defined class must not yet exist
  • Super-Class must already exist
  • Classes cannot be nested within classes
  • Classes cannot be nested within methods (use the commands Create_base_class/End_base_Class for this very advanced feature).
  • End_Class must have a corresponding class
  • End_Class not valid if unresolved method (missing end_procedure / end_function)
  • End_Class not valid if unresolved objects (missing end_object)
  • A class should only contain methods and import_class_protocol command. Creating a property, sending a message or creating an object directly within a class generates an error. <Warning!>

Objects: Object/End_Object Commands

  • If created in a class, it must be inside a procedure or function. <Warning!>
  • End_Object must have a corresponding object
  • If in method, end_object must have a paired object within the method.
  • The method Construct_Object is not allowed within an object (only within classes)

Procedures & Functions

  • Cannot nest methods
  • Methods are not allowed within objects that are defined in a method.
  • If at desktop, it is a method of the Object class. In other words, all objects will understand the message via inheritance, not delegation. Same as “For BaseClass”. If you want a method to reside on the desktop (and resolve via delegation) use “For DFDesktop”
  • The same function name (get_xxxx) cannot be used for cross types (internal/global/class/object-access). <Warning!>
  • The same function name cannot be used with different parameter lists and types.
  • End_Function / End_procedure must have a corresponding procedure/function
  • End_Function / End_procedure checks for unmatched Begin/end blocks within the method.
  • End_Function / End_procedure checks for improperly nested child objects within the method.
  • Procedure_Return / Function_return must be within a procedure or function
  • A Return command inside a method will generate an error. If you must do this use the command Gosub_return. <Warning!>
  • Global variables cannot be created within methods. The compiler will assume the variable is local. If you must create a global within a method, use the new command Global_Variable (e.g., Global_Variable Integer x). <Warning!>

Properties

  • Properties can only be defined within a class or object
  • If defined within a class, property must be within a method <Warning!>
  • If defined within an object, the property must not be within a method. <Warning!>
  • Not allowed within an object that is defined within a method
  • Check that name is valid. It cannot be used for other function type (internal/global/object-access). If a class function, must have same parameter types
  • Syntax of Private/Public is optional (e.g., Property Integer MyVal 10). This is a documentation issue. We don’t really use private properties and the overhead makes properties seem much more complicated.
  • Property integer Name {PUBLIC|PRIVATE} {Val}

Sending Messages: Send/Set/Get Commands

  • Error if sent directly within a class (and not within method in the class) <Warning!>
  • Error if message is an expression (e.g., Get (myobject(self)) to x) <Warning!>
  • We now use stricter syntax checking: <Warning!>
  1. To parameter is required with Get and Set commands
  2. Get syntax expects a single parameter following the To.
  3. Set syntax expects at least one parameter following the To. Multiple parameters are allowed in order to support legacy messages (e.g., set location, set size). They are strongly discouraged for new usage.
  4. Of parameter can be used with Send message in place of To. This creates a consistent syntax with Get and Set
  5. Location of the Of parameter is checked. If present it must immediately follow the message
  6. Item Parameter is not required to precede the item value. Note that in previous revisions of DataFlex the Item keyword was required for some item based messages (e.g., value) but not for others (password_state). It is now never required.
  7. Item and Form based properties will provide a default item value (either Current or 0) if the index parameter is omitted. This is discussed in greater detail under Default Index values in Messages
  • A simpler object naming syntax is supported. You may use ObjectName in place of (ObjectName(self))

Expected Syntax:

Get msg {of obj} {{Field|File_field} param1...paramx} to Val
Set msg {of obj} {{Field|File_field} param1...paramx} to Val
Send msg {of obj} {param1...paramx}

Tolerated Syntax:

Get msg {of obj} {{Field|File_field|item} param1...paramx} to Val
Set msg {of obj} {{Field|File_field|item} param1...paramx} to Val...Valxx
Send msg {of|to obj} {param1...paramx}

Variables

  • The Local keyword is now optional. Variables declared within a method are local. If you must place a global within a method, use the new command Global_Variable. <Warning!>
  • If a local variable is declared outside of a method, an error is generated. <Warning!>
  • If you already have variables declared within a method and you have not used the Local keyword, it is really not clear what your intentions were. Most likely, you wanted the variable to be local and you forgot the word Local. Less likely, you actually wanted a global variable. That variable will now be local. If you really wanted the variable to be global, you will receive a compiler error when you attempt to use it outside of the method. This error will be your clue that you’ve declared your global variable within a method. We advise you to move the variable outside of the method or, if you must, preface the declaration with the Global_Variable command.
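
A small sketch of these scoping rules, using hypothetical names:

Integer giTotal                     // declared outside any method: global, as before

Procedure DoCount
    Integer iCount                  // declared inside a method: local, no Local keyword needed
    Global_Variable Integer giMode  // explicitly global even though declared inside the method
    Move 0 to iCount
End_Procedure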

Object Name Syntax in Messages

  • The keyword self can now be used in place of current_object. The word self is more commonly used across other object-oriented languages and it’s easier to type.
  • An object name must uniquely identify an object. You cannot use the same name for a function (either object function, global function, or built in function). If you try to use the same name for cross purposes you will get an error. Previously you did not get errors for this, but you should have - it was and is an error. <Warning!>
  • Objects may be named the same as variables. See below.
  • When object name follows an “of” keyword (or “to” for send) the simple name of the object can be used in place of the expression access method. The following two lines are the same:

Get MyProp of oMyObj to iMyVar
Get MyProp of (oMyObj(self)) to iMyVar

The following sample shows how you can now access objects without using the expression syntax.

Object oMyView is a ReportView

    Object oLowRange is a Form
    End_object

    Object oHiRange is a Form
    End_object

    Object oMyreport is a SomeReport
    End_object

    Procedure Doit
        integer iLow iHi
        Get Value of oLowRange to iLow
        Get Value of oHiRange to iHi
        Send RunReport of oMyReport iHi iLow
    End_procedure

End_Object

In all of the cases above, the named object (e.g., oMyObj) must be defined before it can be used. It can be defined by creating the object or by using the register_object command.

  • Object names within expressions must still be entered as an expression.

Move (MyProp(oMyObj)) to iMyVar // this is not legal
Move (MyProp(oMyobj(self))) to iMyVar // this is allowed
Get MyProp of oMyObj to iMyVar // this is allowed

  • The move command allows you to move an object Id to a variable without needing to place the object within an expression. The following two lines are the same:

Move oMyObj to hoMyObj
Move (oMyObj(self)) to hoMyObj

  • If oMyObj is both the name of an object and the name of a variable (presumably a local variable that is in scope) the variable name will have precedence. For example:

Object oObj1 is an Array
End_Object


Object oObj2 is an Array
    Procedure Proc1
        Send MyMsg to oObj1 // will send to object oObj1
    End_Procedure
    Procedure Proc2
        integer oObj1
        Move Self to oObj1
        Send MyMsg of oObj1 // will send to oObj2
        // this is very poor coding!!
    End_Procedure
End_Object

  • The simpler object addressing construct is handled by the compiler. The compiled results of hObj and (hObj(self)) are identical. Both are access expressions. If you are going to send a message to the same object many times within a method, it is more efficient to move the object handle to a variable (which is one evaluation) and to use the variable as the message’s object handle.

Procedure Proc1
    Integer hoArray
    Integer iVal
    Move oArray to hoArray // move object handle to variable
    For iVal from 1 to 10000
        Set value of hoArray iVal to "test Data"
    Loop
End_Procedure

  • You can only use this simplified syntax with the move command and with the object handle of a message. You cannot use this syntax with parameters – you must use the full expression syntax:

Set Main_DD to Customer_DD // this is not allowed!
Set Main_DD to (Customer_DD(Self)) // this is the proper syntax

Default Index Values in Messages

DataFlex has a number of messages that require an index parameter. These are often referred to as “item based properties” where an item number (an index) is required to identify which item data value should be queried. A better term for these item based properties would be collections.

A number of these messages have been turned into “auto-index” messages. If an index parameter is not passed, a default value is provided. This was originally done to make it easy to refer to the “current item” and was a feature of dubious value. This technique was applied to much greater advantage in VDF, where objects existed that internally were multi-item but from a usage (and interface) point of view were always single item (item 0). It was not intuitive to have to apply index values when there was always one and only one value, and its index was always zero.

In VDF4 and 5, some multi-item messages (e.g., Get/Set Value) were auto-indexed and did not require an index parameter (e.g., Get Value of hoMyForm to sVar), while other multi-item messages were not and did require an index parameter (e.g., Set Password_State of hoMyForm to True). This inconsistency created confusion and errors.

In VDF6 a more consistent set of auto-index properties has been defined and is listed below. If an index parameter is omitted a default value (usually 0) will be provided. This means that the following two code lines are identical:

Set Value of oForm to sValue // suggested
Set Value of oForm 0 to sValue

In the above example, a form is a single-item object, so you are encouraged not to use an explicit index value. The following example shows two identical code lines. Since an array is multi-item, you are encouraged to always use explicit index values:

Set Value of oArray to sValue
Set Value of oArray 0 to sValue // suggested

We suggest that you apply the following standards to your programs:

  • Single item objects (e.g., Form, Checkbox, ComboForm, dbForm) should never pass an index number as a parameter. Allow VDF to provide index 0 for you.
  • Multi item objects (e.g., Arrays, Grids, dbGrids) should always pass an index number. Even if you know the value is zero or current_item explicitly pass the value. Never allow VDF to apply a value for you.

The compiler must explicitly recognize that a message requires an index parameter. Special compiler commands are created to identify these messages. The indexed messages identified as current-item based, which means that current_item is used if the index parameter is omitted, are:

Value                   Message                 Aux_Value
Shadow_State            Select_State            Checkbox_Item_State
Autoclear_State         Center_State            Entry_State
Item_Changed_State      Item_Entry_Msg          Item_Exit_Msg
Item_Validate_Msg       Data_File               Data_Field
Data_Window             Item_Options            Item_Option
Prompt_Object           Zoom_Object

The indexed messages identified as Form based, which means that zero is used if the index parameter is omitted, are:

Form_Width              Form_Color              Form_Datatype
Form_Options            Form_Font               Form_Row
Form_Column             Form_Typeface           Form_Fontheight
Form_Fontweight         Form_Fontitalics        Form_Fontunderline
Button_Aspect           Form_Height             Form_Guiwidth
Form_Guiheight          Form_Guirow             Form_Guicolumn
Form_Margin             Form_Option             Form_Style
Form_Extended_Style     Form_Border             Password_State
Form_Mask               Form_Button             Form_Button_Value
Form_Window_Handle

The difference between current-item based and form based messages is historical. Since we only recommend that you omit the index parameter on single-item objects, the current item should always be zero. This means that both types of messages should be treated the same.

To allow the compiler to perform maximal error checking, it is assumed that all of the above messages will support one and only one parameter to the left of the To keyword. This allows the compiler to recognize compiler errors in all of the following statements:

Set Value of oArray to 7 "My Value" // index in wrong location
Set Value of oArray 8 "My Value" // missing To
Set Value of oArray iVal     // is iVal index or value?
Set Value 0 to iVal of oArray // object id in wrong place
Set Value oArray iVal "test" // what does this mean?

The older compiler would permit most of these constructs, creating programs that were quite difficult to maintain.

A couple of messages are exceptions to this rule. The following indexed messages require two parameters to the right of the To keyword. They exist for historical reasons only; no new messages will be created in this style. Those messages are: Item_Option, Form_Color, Form_Style, and Form_Extended_Style.

Finally, note that the use of messages with optional parameters is an internal technique. Do not extend this technique to your own custom messages.

Back to Language Changes

Case Statement

The DataFlex Case statement has shipped with DataFlex since version 3.1. It was not documented, and the package “case.mac” had to be used in a program to access it. It is now part of the precompiled packages. The syntax is discussed elsewhere (see the Language Guide and Reference).

If you are using some other case command in your programs you are encouraged to change your applications and use the official command provided with the product.
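
The sketch below shows the general shape of a Case block, assuming the case.mac syntax (Case Begin, Case Break, Case Else, Case End); see the Language Guide and Reference for the authoritative syntax. The variable and values are hypothetical.

Case Begin
    Case (iStatus = 1)
        Move "Open" to sLabel
        Case Break
    Case (iStatus = 2)
        Move "Closed" to sLabel
        Case Break
    Case Else
        Move "Unknown" to sLabel
Case End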

If you must use some other case command, you can disable the DataFlex Case command by commenting out the line “use case.mac” in dfbase.pkg and pre-compiling. This should be a temporary solution; in the next revision, this disabling feature will not be supported.

Back to Language Changes

Contents


Runtime and Package Changes

Data Dictionary Changes
Changes to Existing Behaviors
Extended Pointer Buffer Changes
Private Interface Changes
Crystal Reports Replaces WinQL Class
Changes in the Error Report Message
Changes in the Error Command
WinPrint DfWritePos and DfWriteLnPos Commands

 

A number of bug fixes have been made in the runtime and the packages. A list of these can be found at www.dataaccess.com/visualdataflex/updates. By recompiling your programs and running them under the VDF6 runtime, the bug fixes will immediately be applied to your programs. There are also several changes that may require changes in your programs.

Data Dictionary Changes

Changes were made in the Data Dictionaries to better support batch processes – in particular, processes generated by the DataFlex WebApp Server. Most of these changes were made to be invisible to existing applications. The most important changes are:

  • The DDO’s field validations did not perform Required and Find_Req validations. In visual views, the DEO would normally perform these operations but in batch processes these validations were skipped. The validation now occurs as expected. This has caused a certain amount of confusion, which is discussed below.
  • The new message File_Field_Entry was created that moves data into the DDO’s field buffer in much the same way keyboard entry does. It properly disables entry into fields that are No_Put (both regular and foreign field), it performs capslocks as needed, and it performs autofinds as needed. In some batch processing situations you would want to use this message instead of set File_Field_current_value.
  • The new message File_field_Find was created which performs a find similar to the Item_Find message except that data for the find is moved into the DDO buffer and not the DEO buffer.
  • The messages DefineExtendedField and DefineAllExtendedFields can be used to create DDO support for text and binary fields. This feature was added primarily for batch processing. Once created the messages get/set File_field_current_value, get/set field_current_value, set file_field_changed_value, set field_changed_value, and set file_field_entry can be used to modify text fields. Again, this should really only be used with batch processes. Other extended field messages have been created but they are considered advanced and should be used with care.
  • A new property, allow_foreign_new_save_state, makes it easier to save new parent records when a child record is being saved. For example, you may wish to save a header (parent) record the first time the detail (child) record is saved. In such a case, the parent record should not behave as a foreign file and its allow_foreign_new_save_state should be set True.
  • Changed the way errors are reported when the DD encounters field_current_value messages for extended and overlapped fields. In 5.0.6 we started reporting errors that were ignored in prior revisions. This has caused a great deal of confusion.
  • Within data dictionary objects, the message OnConstrain should be used in place of the Begin_Constraints/End_Constraints command (or the Constrain procedure).
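
As an illustration of the last point, a constraint that previously lived in a Begin_Constraints/End_Constraints block can be placed in an OnConstrain procedure. This is only a sketch; the DDO, table, and field names are hypothetical.

Object oCustomer_DD is a Customer_DataDictionary
    Procedure OnConstrain
        // Limit this view to active customers only
        Constrain Customer.Status eq "A"
    End_Procedure
End_Object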

IMPORTANT: In VDF, the data dictionary objects only directly support date, string and numeric fields. Text, binary and overlap fields are not supported, and attempting to access them from within the DD is an error. The most likely improper use of this would be to get or set the field value with the field_current_value message. In previous versions of the DD packages these errors were handled by ignoring the message and doing nothing. While the error was not reported, the DDO was definitely not doing what you wanted it to do. The right thing to do was to report the error, so in revision 5.0.6 these errors were reported as “Extended Field not defined” errors. This caused confusion because programs that previously appeared to be working now generated error messages. These errors occurred under the following conditions:

  1. You attempted to access a text or binary field with the messages Get/Set Field_Current_Value, Get/Set File_Field_Current_Value or Get Field_Changed_Value. Since these fields are not supported, in all cases, this is an error. You should look at your code and make the required corrections.
  2. You are improperly using overlap fields. You should not directly access overlap fields with DDO messages. It is expected that you will use the underlying (primary) fields that make up the overlap. In particular:
  • You should not access overlaps with the field_current_value, file_field_current_value or field_changed_value messages.
  • You should never mark any overlap fields as a Key-field. It doesn’t work. Instead, you should mark each underlying field as a key-field. A key field can consist of multiple fields.
  • You should not use overlap fields with field options, validation tables, entry, exit or validation messages. You should use the underlying fields instead.

Because this has caused so much confusion, the packages have been changed, so that a new runtime error message will be generated if you attempt to improperly use an overlap field. The reported error will be “999 - Invalid use of overlap with DD.” When displayed, the overlap file and field number will also be displayed. This is a bug in your program that must be fixed. In most cases, you will fix this by loading the Data Dictionary Builder, removing the DD field settings from the overlap field and adding the field setting to the underlying fields.

If you have been improperly accessing text or binary fields you will receive the error message “999 - Extended Field not defined in DD”. The offending file and field will also be reported with the error. You will want to fix this error in your code.

IMPORTANT: In previous versions of VDF, Required and Find_Required validations were not performed if the field needing the validation was not visually represented in a view. This validation loophole has now been closed. This has caused some problems with existing programs. The most likely causes of these problems, and their solutions, are:

  • When a child field relates to a parent field, the required and find_req should always be applied to the parent and not the child field. Setting the child-field is an error. In previous versions of VDF this error went undetected. In VDF6 you will want to correct this.
  • You should never set overlap fields as required or find_req.
  • Do not set overlap fields as key-fields. Instead, you should set each underlying field that makes up the overlap as a key field.
  • If you are saving a new parent record at the same time you are trying to save a new child record you may receive a required validation error. This occurs because the parent is now treated as a foreign file. If any of the parent’s foreign validations are required or find_req (and they probably are) you will get an error message when you attempt to save the child record (which is attempting to also save a new parent record). The solution is to set the parent’s allow_foreign_new_save_state to true. This tells the parent DDO that it should not treat the file as a foreign file.
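
A sketch of that last solution, with hypothetical DDO and table names. The header (parent) DDO used by the view gets the property set so a new parent record may be saved along with the child:

Object oOrderHeader_DD is an OrderHeader_DataDictionary
    // Allow a new header (parent) record to be saved when the detail (child) record is saved
    Set Allow_Foreign_New_Save_State to True
End_Object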

Back to Runtime and Package Changes

Changes to Existing Behaviors

A number of internal changes were made to data-dictionaries to support advanced batch processing required for the DataFlex WebApp Server. These changes are private and should have no effect on current programs. They are presented here in detail for completeness.

Function Field_Validate / Request_Validate

Field_Validate now performs a Required and Find_Req validation as needed. Previously, only data-entry objects performed these validations, which presented a problem with batch processing. Two new private functions, Validate_Required and Validate_FindReq, were created to support this change.

IMPORTANT: It is possible that the change in Field_Validate may alter the way your program works. The most likely change is that you will see a required or find-required error occur where previously none occurred. In previous revisions of the data-dictionary, required and find-required validations were not performed unless a DEO existed for the field within the view. This was a defect. It meant that these validation tests were never performed during batch processes and that they might not be performed during normal data entry (if the field was not present as a visual DEO item).

If you start encountering errors, you may wish to check your DD field settings; your logic may be incorrect. Find_Required is most often used with foreign files (parent files). If you are encountering new validation problems with find_required, keep in mind that the foreign (parent) field validation can be disabled by setting the Validate_Foreign_File_State property to False. This property is normally set within a DD object (not the class), and it is set in the main DDO, not the parent DDO. It can be set for an entire view or set conditionally by augmenting the function Request_Validate.
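
For illustration only (the object and class names are hypothetical), disabling the foreign-field validation for an entire view might look like this, set in the view's main DDO:

Object oOrderDetail_DD is an OrderDetail_DataDictionary
    // Disable required/find-required validation of foreign (parent) fields for this view
    Set Validate_Foreign_File_State to False
End_Object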

Function Validate_data_Sets / Function Validate_Fields

These functions are now passed an additional integer parameter (bNoStop) which determines if the validation should end when the first error is encountered. These are private messages, which should not be sent or augmented, and therefore this change should be transparent to all applications. If you have used these messages you will need to change your code.

Function Data_Set

Previously this function only traversed upwards when looking for the DDO owner of the passed file. This has been changed. If the owner DDO is not found during the upward sweep it will traverse in a downward direction. Since DDOs are almost always found in an upwards sweep, this should have no impact on applications.

Get/Set File_Field_Current_Value / Field_Current_Value

These messages may be used with extended DDO fields (text and binary). This allows you to move text fields in and out of strings. This is only supported if extended fields are supported, if the extended field has been created for the DDO, and the maximum string length is large enough to hold the extended value. If any of these conditions are not met, an error will occur.

It is only expected that you will use these messages during batch processing.

Back to Runtime and Package Changes

Extended Pointer Buffer Changes

The new VDF 6 runtime contains changes to support direct access to heap memory via pointers. This enhancement has been used to provide basic support for extended fields in the DD. Extended fields are text and binary fields. Normally, DDs do not support these field types internally. These changes can only be used with runtimes that support the new pointer logic and support the windows heap memory interface.

Extended fields are supported as follows: When a DD is created, local buffers are not created for extended fields (text and binary). If these fields are not needed, you do not want to incur the added overhead of these fields. An extended DD field can be created for any field by sending the message DefineExtendedField passing the field number of the extended field. Extended DD fields can be created for all text and binary fields within a DDO by sending the message DefineAllExtendedFields.

When an extended DD field is created, a “field object” is created for this field. Within this field object heap memory is allocated for this field’s buffer. Once created values are moved between the file buffer and the extended fields in the same way they are moved in and out of the normal fields. The “refresh” process moves data from the file buffer to the DD buffer, and the “update” process moves data from the DD buffer to the file buffer. In addition a mechanism is provided for updating the DD buffer value.

The DDO has an interface that allows access to the field-object. In addition, once the DD has identified the Field object id, an interface exists within the field object that can be directly accessed. Currently the Field-object interface is private, and is therefore not documented.

The DD extended field interface is:

Procedure DefineExtendedField
Send DefineExtendedField Field FileName.FieldName

An extended DD field can be created for any field by sending the message DefineExtendedField passing the field number of the extended field.

Object Customer_DD is a Customer_DataDictionary
Send DefineExtendedField Field Customer.Notes
End_Object

Procedure DefineAllExtendedFields
Send DefineAllExtendedFields

Extended DD fields can be created for all text and binary fields within a DDO by sending the message DefineAllExtendedFields.

Object Customer_DD is a Customer_DataDictionary
Send DefineAllExtendedFields
End_Object

Procedure Set File_Field_Pointer_Entry
Set File_Field_Pointer_Entry of hDD iFile iField iLen bShowErr to pData
Set File_Field_Pointer_Entry of hDD File_Field FileName.FieldName iLen bShowErr to pData

This is called to move data from an entry source to the DD extended buffer. This is identical to the File_Field_Entry message except that a pointer to the data is passed. You must make sure that this pointer addresses valid memory and that the length of the data is correct. If the length passed, iLen, is less than the field length, the rest of the field will be zero-filled. If the length passed is greater than the length of the field, the value will be truncated.

The parameter bShowErr determines if an error should be generated if the data is invalid for the passed file type. The value should not matter since it is not possible to pass invalid data to the currently supported extended field types.

If the extended field does not exist, an error will be generated.

You may use the message Set File_Field_Entry to enter data into all fields (including text and binary). If this message is used with an extended field, the passed string value will be converted to a pointer and the message will be directed to the extended pointer messages.

Procedure Set File_Field_Current_Pointer_Value / Field_Current_Pointer_Value
Set File_Field_Current_Pointer_Value iFile iField iLen to pData
Set File_Field_Current_Pointer_Value File_Field FileName.FieldName iLen to pData
Set Field_Current_Pointer_Value iField iLen to pData
Set Field_Current_Pointer_Value Field FileName.FieldName iLen to pData

This is called to move data from an entry source to the DD extended buffer. It is identical to the Set File_Field_Current_Value and Field_Current_Value messages except that a pointer to the data is passed. You must make sure that this pointer addresses valid memory and that the length of the data is correct. If the length passed, iLen, is less than the field length, the rest of the field will be zero filled. If the length passed is greater than the length of the field, the data will be truncated to the field length.

It is expected that this message would only be used in batch processing.

If the extended field does not exist, an error will be generated.

You may use the messages Set File_Field_Current_Value, Set Field_Current_Value, Set File_field_Changed_value or Set Field_Changed_Value to enter data into all fields (including text and binary). If these messages are used with an extended field, the passed string value will be converted to a pointer and the message will be directed to the extended pointer messages.

String sData
Get File_Field_Current_Value File_Field Customer.Comments to sData
Move (Lowercase(sData)) to sData
Set File_Field_Changed_Value File_Field Customer.Comments to sData

Function File_Field_Current_Pointer_Value / Field_Current_Pointer_Value
Get File_Field_Current_Pointer_Value of hDD iFile iField to pData
Get File_Field_Current_Pointer_Value of hDD file_Field FileName.FieldName to pData
Get Field_Current_Pointer_Value of hDD iField to pData
Get Field_Current_Pointer_Value of hDD Field FileName.FieldName to pData

These functions return a pointer to the data in the extended field object. You can use this to make a copy of the data. While you could also use it to change the data, you are not encouraged to do so. If you need to change the data in the extended field object, use the File_Field_Pointer_Entry message.

Note that this message is not the same as Field_Current_Value, where the string value of the DD buffer is returned in a new string. This returns a pointer to the actual DD data. It does not create a copy of the data; you must do that yourself. You could do this by moving the pointer to a string (make sure that the maximum string size is big enough to hold the data).

Address pData
String sData
Get File_Field_Current_Pointer_Value File_Field Customer.Comments to pData
Move pData to sData
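
If the text field can hold more data than the current maximum string size, you may need to raise that maximum before making the copy. A minimal, hedged sketch (the use of the Set_Argument_Size command here and the value 65536 are assumptions, not taken from this section):

// Allow string variables/arguments up to 64K before copying a large text field
Set_Argument_Size 65536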

You may also use the messages Get File_Field_Current_Value and Get Field_Current_Value to retrieve extended fields into a string. If the extended field does not exist, or the field’s length is greater than the maximum allowable string size, an error will be generated. The sample above could be rewritten as:

String sData
Get File_Field_Current_Value File_Field Customer.Comments to sData

Function Field_Object
Get Field_Object iField to hFieldObj
Get Field_Object Field FileName.FieldName to hFieldObj

Returns the object ID of the extended field object. If zero, no extended field object exists.
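
A minimal sketch of checking for the field object (this assumes the Customer_DD object and the extended Customer.Notes field from the earlier examples):

Handle hFieldObj
Get Field_Object of Customer_DD Field Customer.Notes to hFieldObj
// A zero handle means no extended field object has been defined for Customer.Notes
If (hFieldObj <> 0) Begin
    // Safe to use the extended field interface for this field
End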

Back to Runtime and Package Changes

Private Interface Changes

Procedure Set Field_Pointer_Entry integer iField integer iOpts integer iLen integer bShowErr Address pData

Private: Use File_Field_Pointer_Entry

Procedure ExtendedFieldsUpdate integer bSave

Private: Called when data must be moved from the DD field to the file buffer. This updates all extended fields. If bSave is true, the update is for a save; if false, it is for a find. Normally, extended DD fields are not updated during a find (since extended fields cannot have indexes).

Procedure ExtendedFieldsRefresh integer iRec

Private: This is called when a new record has been found or cleared and data must be moved from the file buffer to the DD buffer. This refreshes all extended field buffers. If iRec is zero (0), the record is being cleared; otherwise iRec is the record number of the new record.

Back to Runtime and Package Changes

Crystal Reports Replaces WinQL Class

The VDF class which “wraps” a WinQL (a.k.a. Crystal) report has been changed from WinQLReport to CrystalReport. Within your programs, you will need to change all instances of WinQLReport to CrystalReport.
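
For example, a minimal sketch of the rename (the object name oInvoice_Report and its empty body are illustrative; only the class name changes):

// Before (VDF5):
Object oInvoice_Report is a WinQLReport
End_Object

// After (VDF6):
Object oInvoice_Report is a CrystalReport
End_Object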

Back to Runtime and Package Changes

Changes in the Error Report Message

A bug in the previous error handler caused errors on lines above 64K to not be displayed properly. The fix for this required that an additional required parameter be passed to the Error_Report message. If your programs do not directly send or augment the Error_Report message (and most programs do not), you will not need to make any changes. If your programs do send or augment this message, you will need to make the changes described below.

The error object handler processes errors by sending the message Error_Report. In previous runtimes, two parameters were passed to this procedure: errorInfo (integer) and errorText (string). ErrorInfo was a packed integer containing both the error number and the line number. This “packing” scheme only works with programs with fewer than 64K lines; when programs contain more than 64K lines, errors will not be properly reported.

This has been fixed in the runtime. The Error_report message now receives three (non-packed) variables. The new format for error_report is:

procedure Error_Report integer iErrNum integer iErrLine string sErrMsg

IMPORTANT: If you have created any augmentations of Error_report you must change your programs. Your procedure must now receive three parameters and you must make sure you forward three parameters. For example, assume that you had the following procedure:

Procedure Error_Report integer iErrInfo string sErrMsg
    integer hErr
    integer iErrNum iErrLine
    If (error_processing_state(self)=False) Begin
        Set Error_processing_State to True // prevents recursion
        Move (Hi(iErrInfo)) to iErrNum
        Move (Low(iErrInfo)) to iErrLine
        Send Log_this_Error iErrNum iErrLine sErrMsg
        Get Old_Error_Object_Id to hErr
        Send Error_Report to hErr iErrInfo sErrMsg
        Set Error_processing_State to False
    End
End_procedure

It would be changed as follows:

Procedure Error_Report integer iErrNum integer iErrLine string sErrMsg
    handle hErr
    If (error_processing_state(self)=False) Begin
        Set Error_processing_State to True // prevents recursion
        Send Log_this_Error iErrNum iErrLine sErrMsg
        Get Old_Error_Object_Id to hErr
        Send Error_Report to hErr iErrNum iErrLine sErrMsg
        Set Error_processing_State to False
    End
End_procedure

If you are sending the message Error_Report to trigger an error, you will also need to change your parameter list. However, you should not be using this technique at all! If you wish to generate an error, you should do so with the Error command. The message Error_Report should only ever be sent from within the Error_Report procedure. Some developers have been using this improper technique as a means of generating errors with text longer than 40 characters. Now that the 40-character limit has been removed, you should no longer need to use this technique.

IMPORTANT: If you find that you are encountering nonsense error messages, such as error numbers of 0 and error text that makes no sense, then your program probably contains calls to Error_Report or augmentations of Error_Report, and you did not change your source to handle the additional parameter.

Back to Runtime and Package Changes

Changes in the Error Command

The Error command now supports text strings of more than 40 characters. The new limit is 2,048 characters. This allows you to pass much longer error messages in the Error command. For example:

Error 300 “This string can now be MUCH longer than 40 characters”

Back to Runtime and Package Changes

WinPrint DfWritePos and DfWriteLnPos Commands

You can now pass a length parameter with the DfWritePos and DfWriteLnPos commands. This can be used to solve the problem of overlapping output. This change is 100% backwards compatible.

The new syntax for the commands is:

dfWritePos variable position [attributes [decimal_places [MaxLength]]]
dfWriteLnPos variable position [attributes [decimal_places [MaxLength]]]

The new parameter, MaxLength, determines the maximum length of the output value using the current system metrics (Inch or Cm). The Report Wizard has been modified to use the max-length feature.
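
For example (the same sample appears in the WinPrint section later in this document; the field names and lengths are illustrative):

DfWritePos Customer.Name 5 FONT.DEFAULT -1 6.5
DfWritePos Customer.Total 5 FONT.DEFAULT 2 1.8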

Back to Runtime and Package Changes

Contents


Documentation

New Explorer Style Help
Help Ordering
New Language Guide
Language Reference

We continue to make efforts to improve the format, content and accessibility of our documentation. The most significant changes for VDF6 are:

New Explorer Style Help

Help is now presented in a split screen, Explorer style format with the help topics (Contents, Index or Search) presented on the left and the current help topic presented on the right. You are probably looking at this new help format right now. You should find this new format to be both easier to use and more powerful.

Back to Documentation

Help Ordering

The ordering of the main help topics is:

  • IDE: Provides a complete guide to using the IDE. It covers all topics related to developing applications with the IDE.
  • Database Builder: Explains how to use Database Builder to create data-files and to build and modify data-dictionaries.
  • Debugger: Explains how the new VDF debugger is used.
  • Language Guide: This new document provides a complete introduction to the DataFlex language.
  • Language Reference: Previously called “Command Reference.” It contains information about all VDF commands, functions, variables and tokens. Both the format and content have been significantly improved.
  • Class Reference: Provides reference to all VDF classes.
  • Developer's Guide: This section contains information about various development topics. These are the topics that did not fit neatly into other areas. This was previously called the Language Guide.
  • Compiler: This provides information about the VDF compiler.

Back to Documentation

New Language Guide

The Language Guide is a new and important addition to VDF. It provides an introduction to the VDF Language. This document is presented in both printed and on-line format. New developers will find the guide to be invaluable. Current developers are also encouraged to review the guide. It provides an excellent overview of VDF’s new suggested usage.

Back to Documentation

Language Reference

The old VDF “Command Reference” is now called the “Language Reference.” Both its format and content have been significantly altered. These changes include:

  • Over 100 commands have been marked as obsolete and moved to a special section for obsolete commands and functions. Each obsolete entry contains a link to the command or function that should replace it.
  • Functions, global variables, database API attributes and compiler directives have been moved into their own separate sub-sections.
  • The Command sub-section is much smaller and now only contains current commands.
  • All documentation and samples have been reviewed and updated to make sure that the information presented is accurate for VDF6 and adheres to suggested programming styles. (The exception to this is the obsolete commands, which have not been changed).

IMPORTANT: Commands and functions that are now marked as obsolete can still be used in VDF6, and your existing programs will still run. These items have been marked as obsolete primarily to serve as a guide for your future program development. While you do not need to change your existing programs, you are strongly encouraged not to use obsolete commands in new programs.

Back to Documentation

Contents


Crystal Reports 7 for DataFlex

Crystal Report Writer 7 (CRW7) is now provided as a standard component of VDF6. This replaces WinQL (the previous version of Crystal Reports provided with VDF5). CRW7 is backward compatible with WinQL: all reports created with WinQL will run unchanged under CRW7.

IMPORTANT: The name of the Crystal Report class has been changed from WinQLReport to CrystalReport. Any existing reports based on the old WinQLReport class will need to be changed to CrystalReport.

The change to CRW7 will impact you in the following ways:

  • You now have full access to the many advanced features and enhancements found in CRW7.
  • Reports created using WinQL may be used with CRW7 without changes.
  • Within existing report views, you will need to rename all instances of WinQLReport to CrystalReport. Other than the name change, these classes are 100% compatible. The name change was made to remove any possible confusion about support for WinQL versus Crystal.
  • The CrystalReport class does not yet take advantage of all of CRW7’s new features. Future revisions of this package will address this.
  • A Crystal Report Writer wizard has been added to the IDE. It allows you to select an existing Crystal report and to create a report view with a “front end” specially customized for this report.

Contents


Connectivity Kits

The DataFlex Connectivity Kit for ODBC
The DataFlex Connectivity Kit for Pervasive.SQL
The DataFlex Connectivity Kit for IBM DB2

The DataFlex Connectivity Kit for ODBC

New Features

  1. The number of records in use attribute has been implemented.
  2. The handling of nulls until now was implicit and ambiguous. We have changed this into explicit null support: values are not null unless the programmer specifies them to be null. This is accomplished using the DF_FIELD_IS_NULL attribute, which can be set to true or false. Setting a field to null will clear its value.
  3. Switched statements to NOSCAN. ODBC offers so-called escape clause support: every statement executed is scanned for escape clauses, which are replaced by their vendor-specific equivalents. Most statements we use do not use escape clauses at all, so we have switched escape clause scanning off for all statements where this is possible. This should result in faster statement parsing.
  4. Database Builder & Drvr_cnv schema name support for conversion. It is now possible to supply a schema name to convert to. This can be done both for ODBC and DB2. It is useful when converting files with identical names to the database; usually the original database has these files in different directories.
  5. When opening a table, the primary key information will be read from the backend (not to be confused with the DF record identity). The primary key fields are defined as an index. The user can set up the index number for this index by using the INDEX_NAME keyword in the intermediate file and setting it to SQL_PRIMARY. Be aware that not all ODBC drivers support getting this information; in that case you cannot use this way to define an index.
  6. Database Builder log & run unattended: Database Builder will now create a log file when converting to ODBC or DB2. It also has a run-unattended mode, which will write errors to the log file instead of popping them up and waiting for user input. This functionality can be found in Database Builder 1.089; betas of Database Builder for VDF5 can be downloaded from the Data Access FTP site ftp://ftp.dataaccess.com, in anonymous/pub/updates/beta.
  7. Setting the DF_FILE_ALIAS attribute to DF_FILE_ALIAS_DEFAULT is now supported.
  8. Primary index checking. The Data Access ODBC Client will now check for primary indexes more strictly. It was possible to access a table with no, or a badly defined, primary index. The check for a legal primary index has been added to the find, delete, update (saving existing records with changes), get-recnum and set-recnum functions.
  9. The “connect to ODBC” option in Database Builder now generates an FD file.
  10. The possibility to connect or convert to a file data source has been added.
  11. Introduced the SCHEMA_NAME intermediate file keyword. In an SQL database, different schemas can exist within one database, and two schemas can contain tables with the same name. This could result in errors when opening a table that occurs in multiple schemas to which the user has access rights: ODBC would report the fields of all of those tables instead of just the one in the intended schema. This can be fixed by adding the SCHEMA_NAME keyword to the intermediate file of the table in question, set to the name of the schema the table is defined in.
  12. Overlap fields in indexes. The conversion logic did not support overlap fields in indexes; it simply would not convert. This has been adjusted. The index definition on the backend will be an index containing all overlapped fields.
  13. Conversion type with length in the middle. The conversion logic assumed all types defined the field length at the end of the definition, like “BINARY (255)”. It turns out there are backends that do not comply with this expectation. The driver now checks where the size must be placed, so you can convert to a type such as “CHARacter (255) FOR BIT DATA”.
  14. An unsuccessful find would clear the buffer. This behavior has been removed; the buffer stays in the same state as before the find.
  15. Introduced the PRIMARY_INDEX_TRIGGER intermediate file keyword. You can identify a primary index as being triggered. Setting the intermediate file keyword PRIMARY_INDEX_TRIGGER to YES will set this up; the default is NO. If this is set for a file, the driver will try to determine the new record number of a created record automatically. This is done by performing a “select max(recordid)” after the record has been created. Setting up the trigger is the user’s responsibility.
  16. After a save operation the record will be re-found. This ensures the correct information is in the buffer in case the server has triggers defined on some of the columns in the table. Doing a re-find after each save will slow down the performance of save operations, and not all files have triggers defined, so the extra find is not always needed. You can set up this behavior on a file-by-file basis by setting the “REFIND_AFTER_SAVE” intermediate file keyword to “YES”. Alternatively it can be set to “NO”, which is the default.
  17. Default index names. The index names generated by the Structure_End logic have changed from <Tablename>1, <Tablename>2 … to <Tablename>001, <Tablename>002 …. This will show the indexes in the correct order when querying the SQL backend.
  18. The Structure_End logic supports all driver-specific intermediate file keywords. At this moment it supports the following keywords: FIELD_OVERLAP_START, FIELD_OVERLAP_END, PRIMARY_INDEX, PRIMARY_INDEX_TRIGGER, SYSTEM_FILE, FIELD_STORE_TIME, MAX_ROWS_FETCHED, SCHEMA_NAME, INDEX_NAME, TRANSLATE_OEM_TO_ANSI, REFIND_AFTER_SAVE. Unsupported keywords are: FIELD_OVERLAP_OFFSET_START and FIELD_OVERLAP_OFFSET_END. They can be used in an intermediate file, but the Structure_End logic will replace fields defined this way by complete overlapped fields using the FIELD_OVERLAP_START and FIELD_OVERLAP_END keywords.
  19. New DataFlex package layout. We adjusted the DataFlex packages for ODBC and DB2. Since a large part of the attributes overlap, we created three packages: CLI.PKG, which defines common functionality and constants; ODBC_DRV.PKG, which defines ODBC-specific functionality and constants; and DB2_DRV.PKG, which defines DB2-specific functionality and constants. In a DataFlex program you only need to use the driver-specific package.
  20. Made sure the extra attributes defined for the driver can be set and read through the DataFlex attribute commands (see the example after this list). For this purpose we have defined the following attribute constants: DF_FILE_MAX_ROWS_FETCHED, DF_FILE_PRIMARY_INDEX_TRIGGER, DF_FILE_TRANSLATE_OEM_TO_ANSI, DF_FILE_REFIND_AFTER_SAVE, DF_FIELD_STORE_TIME, DF_INDEX_NAME.
  21. When setting the type of a new or existing column to date, the size would not be set automatically. This has been fixed. The size is set to 10, which is the smallest size that can hold the string YYYY-MM-DD (the SQL date representation). The actual size once the change has been made permanent depends on the backend.
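
The driver-specific attributes listed in item 20 can be read and written with the standard Get_Attribute and Set_Attribute commands. A minimal sketch (the Customer table and the choice of REFIND_AFTER_SAVE are illustrative assumptions):

Integer bRefind
// Read the current refind-after-save setting for the Customer table
Get_Attribute DF_FILE_REFIND_AFTER_SAVE of Customer.File_Number to bRefind
// Switch it on for this table
Set_Attribute DF_FILE_REFIND_AFTER_SAVE of Customer.File_Number to True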

Back to Connectivity Kits

The DataFlex Connectivity Kit for Pervasive.SQL

Notes

The installation will no longer include the Btrieve 6.15 Workstation Engine.

New Features

  1. Faster file opening: files can now be opened faster by using a so-called structure cache. This method writes the complete structure of a file to a .CCH file so that the next open doesn't need to read the information from the DDF files, but can read it straight from the CCH file.
  2. Faster finds: a new method has been developed to get the (internal) recnum for a file. This method may speed up finds by up to 30%.
  3. New field types: support has been added for three field types which became available with Pervasive 7:
  • CURRENCY: This field in Pervasive.SQL will be converted to a 14.4 BCD. The maximum number of digits for the integer part is 14 in DataFlex, while CURRENCY supports up to 15 integer digits. When a value cannot be represented in a BCD, or a BCD's value cannot be represented in a CURRENCY field, an error will be generated.
  • TIMESTAMP: A Btrieve timestamp field holds the number of septaseconds (10^-7 seconds) since January 1st, 0001 in the Gregorian calendar. The value will be represented in DataFlex as a BCD where the integer part represents the number of seconds and the decimal part represents fractions of a second.
  • 64-bit INTEGER: These will be represented as a BCD in DataFlex. If the number in the file is too large to represent in a BCD, an error will be generated. If a value to be stored has a decimal part, an error will be generated too.

New Locking / Transaction Method

A new method for locking has been implemented. The previous version of the driver used to lock every file/record accessed during a transaction, including files that had their file mode set to READONLY. This has been changed so that, when using concurrent transactions, files that have been set to READONLY will no longer be locked. Exclusive transactions will lock each and every open file.

The driver supports two types of transactions: Exclusive and concurrent. The default is concurrent. Exclusive transactions will lock a complete file while concurrent transactions will only lock one record at a time.

The moment at which a lock will be placed depends on the setting EXPLICIT_LOCKING. When this has been set to 0, it will place a lock the first time a file/record is accessed within a transaction. When set to 1, it will place the lock immediately when the transaction is started.

TRANSACTION_TYPE = EXCLUSIVE & EXPLICIT_LOCKING = 0

The files will be locked when accessed for the first time in a transaction. The FILE_MODE of a file doesn't matter; each file will be locked. Note that this mechanism can cause deadlocks!

TRANSACTION_TYPE = EXCLUSIVE & EXPLICIT_LOCKING = 1

All open files will be locked when a transaction is started, no matter what FILE_MODE is used.

TRANSACTION_TYPE = CONCURRENT & EXPLICIT_LOCKING = 0

Records will be locked when accessed for the first time in a transaction. When a file's FILE_MODE is set to READ_ONLY, records will NOT be locked, unless the LOCK_READONLY setting has been set to 1. Note that this mechanism can cause deadlocks!

TRANSACTION_TYPE = CONCURRENT & EXPLICIT_LOCKING = 1

Active records will be locked when the transaction is started. When a file's FILE_MODE is set to READ_ONLY, records will NOT be locked, unless the LOCK_READONLY setting has been set to 1. Note that this mechanism can cause deadlocks!

LOCK_TIMEOUT & LOCK_DELAY

Two keywords have been added to control the lock timeout and delay value when a record or file is in use. LOCK_TIMEOUT can be set to the number of milliseconds to keep trying to get a lock; when set to 0 it will try forever until it succeeds. By default this setting is read from the DataFlex DF_LOCK_TIMEOUT attribute. LOCK_DELAY controls the number of milliseconds to pause between two lock tries. By default this setting is read from the DataFlex DF_LOCK_DELAY attribute.
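
As an illustration only, a hedged sketch of how these locking-related keywords might be combined (the keyword names come from this section; the sample values, and exactly where they are placed in the driver configuration, are assumptions, and the notation simply mirrors the headings above):

TRANSACTION_TYPE = CONCURRENT
EXPLICIT_LOCKING = 0
LOCK_READONLY = 0
LOCK_TIMEOUT = 10000
LOCK_DELAY = 100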

Back to Connectivity Kits

The DataFlex Connectivity Kit for IBM DB2

New Features

  1. Spaces in column names were not supported. The driver will now generate a “quoted identifier” in such cases.
  2. The number of records in use attribute has been implemented.
  3. Added null value support. The handling of nulls until now was implicit and ambiguous. We have changed this into explicit null support: values are not null unless the programmer specifies them to be null. This is accomplished using the DF_FIELD_IS_NULL attribute, which can be set to true or false. Setting a field to null will clear its value.
  4. Switched statements to NOSCAN. The DB2 CLI offers so-called escape clause support: every statement executed is scanned for escape clauses, which are replaced by their vendor-specific equivalents. Most statements we use do not use escape clauses at all, so we have switched escape clause scanning off for all statements where this is possible. This should result in faster statement parsing.
  5. Database Builder & Drvr_cnv schema name support for conversion. It is now possible to supply a schema name to convert to. This can be done both for ODBC and DB2. This can be used when converting files with identical names to the database. Usually the original database has these files in different directories.
  6. When opening a table, the primary key information will be read from the backend (not to be confused with the DF record identity). The primary key fields are defined as an index. The user can set up the index number for this index by using the INDEX_NAME keyword in the intermediate file and setting it to SQL_PRIMARY.
  7. Changed the logic that determines the main index for a field. The main index of an overlap field is the main index of the first overlapped field that has a main index. For example, if we have an overlap field overlapping fields 1, 2 and 3, and the main index for field 1 is 0, the main index for field 2 is 3 and the main index for field 3 is 1, then the main index of the overlap field will be 3.
  8. Database Builder will now create a log file when converting to ODBC or DB2. It also has a run-unattended mode, which will write errors to the log file instead of popping them up and waiting for user input. This functionality can be found in Database Builder 1.089; betas of Database Builder for VDF6 can be downloaded from the Data Access FTP site ftp://ftp.dataaccess.com, in anonymous/pub/updates/beta.
  9. We have downloaded and installed the ODBC 3.5 SDK, the new version of ODBC from Microsoft. It supports some new features, but it was mainly installed to keep up to date with developments in the ODBC area. We have not found any problems running existing installations with this update.
  10. Setting the DF_FILE_ALIAS attribute to DF_FILE_ALIAS_DEFAULT is now supported.
  11. Field cleared after moving invalid data. When moving an invalid value to a field, the field’s value would be cleared. This has been fixed; the original value stays in the field.
  12. The Data Access ODBC Client will now check for primary indexes more strictly. It was possible to access a table with no, or a badly defined, primary index. The check for a legal primary index has been added to the find, delete, update (saving existing records with changes), get-recnum and set-recnum functions.
  13. The “connect to ODBC” option in Database Builder now generates an FD file.
  14. The ability to connect or convert to a file data source has been added.
  15. Introduced the SCHEMA_NAME intermediate file keyword. In an SQL database, different schemas can exist within one database, and two schemas can contain tables with the same name. This could result in errors when opening a table that occurs in multiple schemas to which the user has access rights: ODBC would report the fields of all of those tables instead of just the one in the intended schema. This can be fixed by adding the SCHEMA_NAME keyword to the intermediate file of the table in question, set to the name of the schema the table is defined in.
  16. Some backends have specific types that do not match any of the pre-defined ODBC types. These types would not be allowed. We changed this so that such types will be reported as TEXT.
  17. The conversion process would change the DataFlex definition of a file and then convert. This has been changed to manipulate the ODBC structure instead. This way the original DataFlex file stays intact even if something goes wrong during conversion. Furthermore, the record copy logic was changed not to stop when an error occurs but to continue with the next record.
  18. The conversion logic did not support overlap fields in indexes; it simply would not convert. This has been adjusted. The index definition on the backend will be an index containing all overlapped fields.
  19. The conversion logic assumed all types defined the field length at the end of the definition, like “BINARY (255)”. It turns out there are backends that do not comply with this expectation. The driver now checks where the size must be placed, so you can convert to a type such as “CHARacter (255) FOR BIT DATA”.
  20. Introduced the PRIMARY_INDEX_TRIGGER intermediate file keyword. You can identify a primary index as being triggered. Setting the intermediate file keyword PRIMARY_INDEX_TRIGGER to YES will set this up; the default is NO. If this is set for a file, the driver will try to determine the new record number of a created record automatically. This is done by performing a “select max(recordid)” after the record has been created. Setting up the trigger is the user’s responsibility.
  21. Re-find after save. After a save operation the record will be re-found. This ensures the correct information is in the buffer in case the server has triggers defined on some of the columns in the table. Doing a re-find after each save will slow down the performance of save operations, and not all files have triggers defined, so the extra find is not always needed. You can set up this behavior on a file-by-file basis by setting the “REFIND_AFTER_SAVE” intermediate file keyword to “YES” (see the sketch after this list). Alternatively it can be set to “NO”, which is the default.
  22. Partial overlaps. Basically, the way overlaps are defined in DataFlex is not enough to accurately convert an overlap field to another backend (any backend). Overlap fields are defined as an offset and a length. In another backend, the offset is usually not the same, and the length can also be completely different, for an overlap that is functionally equivalent (overlaps the same fields). For this reason it will not always be possible to relate two overlaps in two different backends; there is nothing we can do about that. The current conversion logic will convert all overlap fields as overlapping complete fields, regardless of the original definition. It does this by determining the start and end fields of the overlap and using the offset and length of those fields in the converted structure to set up the field attributes. This is the best we can do at conversion. If a partial overlap is needed, the intermediate file must be edited manually. Partial overlaps are defined by specifying the start and end offset using the “FIELD_OVERLAP_OFFSET_START” and “FIELD_OVERLAP_OFFSET_END” intermediate file keywords. Remember that these are manual settings in the intermediate file: every re-structure operation that results in a new intermediate file will overwrite them, because the Structure_End logic forces overlaps to be on complete fields. When setting these keywords, keep in mind that ODBC fields can have different sizes from their DataFlex counterparts.
  23. ANSI character set support. In a Windows environment there are two character sets in use, OEM and ANSI. DataFlex (both VDF and DF3.1c) uses the OEM character set; other tools may use another character set. The data in the database will be stored in the character set provided by the program. This may cause problems when using both DataFlex and some other ANSI-oriented tool to access and manipulate the same data. You can set up the driver so that it stores string fields in ANSI format. In that case, the string will be converted to ANSI when moved into the buffer and converted to OEM when moved out of the buffer. This can be set up on a file-by-file basis by setting the “TRANSLATE_OEM_TO_ANSI” intermediate file keyword to “YES”. Alternatively, it can be set to “NO”, which is the default.
  24. Default index names. The index names generated by the Structure_End logic have changed from <Tablename>1, <Tablename>2 … to <Tablename>001, <Tablename>002 …. This will show the indexes in the correct order when querying the SQL backend.
  25. Index_name support. When indexes use a different naming convention than the ODBC Client's default (001, 002, 003, etcetera), changing the index definition could be a problem: the ODBC Client's Structure_End logic deletes all indexes and re-creates them, and to delete an index in ODBC you need to know its name. The INDEX_NAME intermediate file keyword supplies that name.
  26. Structure_End driver-specific keyword support. The Structure_End logic supports all driver-specific intermediate file keywords. At this moment it supports the following keywords: FIELD_OVERLAP_START, FIELD_OVERLAP_END, PRIMARY_INDEX, PRIMARY_INDEX_TRIGGER, SYSTEM_FILE, FIELD_STORE_TIME, MAX_ROWS_FETCHED, SCHEMA_NAME, INDEX_NAME, TRANSLATE_OEM_TO_ANSI, REFIND_AFTER_SAVE. Unsupported keywords are: FIELD_OVERLAP_OFFSET_START and FIELD_OVERLAP_OFFSET_END. They can be used in an intermediate file, but the Structure_End logic will replace fields defined this way by complete overlapped fields using the FIELD_OVERLAP_START and FIELD_OVERLAP_END keywords.
  27. Attribute support. Made sure the extra attributes defined for the driver can be set and read through the DataFlex attribute commands. For this purpose we have defined the following attribute constants: DF_FILE_MAX_ROWS_FETCHED, DF_FILE_PRIMARY_INDEX_TRIGGER, DF_FILE_TRANSLATE_OEM_TO_ANSI, DF_FILE_REFIND_AFTER_SAVE, DF_FIELD_STORE_TIME, DF_INDEX_NAME.
  28. Date type size. When setting the type of a new or existing column to date, the size would not be set automatically. This has been fixed. The size is set to 10, which is the smallest size that can hold the string YYYY-MM-DD (the SQL date representation). The actual size once the change has been made permanent depends on the backend.
  29. FOR FETCH ONLY added to select statements. All generated select statements have the FOR FETCH ONLY clause. This should improve concurrency performance.
  30. FOR UPDATE and positioned updates / deletes. Implemented a scheme where we use the FOR UPDATE clause together with positioned updates and deletes. Basically, we re-use the record found in the Reread when updating or deleting the record. This should speed up updating (amending) or deleting records. The optimization only works when Reread is used (or a find EQ by recnum after a lock, which amounts to the same thing).

Back to Connectivity Kits

Contents


WinPrint

Reporting Interface Changes
Report Wizard
Registry Settings for the Report Wizard

The latest version of WinPrint is 1.22.

The most significant change in WinPrint is that you can now pass a length parameter with the DfWritePos and DfWriteLnPos commands. This can be used to solve the problem of overlapping output. This change is 100% backwards compatible.

The new syntax for the commands is:

dfWritePos variable position [attributes [decimal_places [MaxLength]]]
dfWriteLnPos variable position [attributes [decimal_places [MaxLength]]]

The Report Wizard has been modified to use the max-length feature.

Back to WinPrint

Reporting Interface Changes

Commands

dfWritePos variable position [attributes [decimal_places [MaxLength]]]
dfWriteLnPos variable position [attributes [decimal_places [MaxLength]]]

The new parameter, MaxLength, determines the maximum length of the output value using the current system metrics (Inch or Cm).

For example:

DfWritePos Customer.Name 5 FONT.DEFAULT -1 6.5
DfWritePos Customer.Total 5 FONT.DEFAULT 2 1.8

This is an optional parameter and if not passed the entire output value will be output. Passing a length of zero (0) will also output the entire string.

If you wish to pass the MaxLength parameter, you must also pass parameters for Attributes and Decimal_places. Pass -1 for no decimal places (i.e., a string output). The following commands are all equivalent:

DfWritePos Customer.Name 10
DfWritePos Customer.Name 10 FONT.DEFAULT
DfWritePos Customer.Name 10 FONT.DEFAULT -1
DfWritePos Customer.Name 10 FONT.DEFAULT -1 0

WinPrint Object Message

Note: These messages are not currently documented. Developers may use them, and they should be considered public messages.

Procedure DFWritePos String sText DWORD iStyle Number Pos Integer Dec Number nMaxLen
Procedure DFWritelnPos String sText DWORD iStyle Number Pos Integer Dec Number nMaxLen
Procedure DFWritePosToPage Integer Page String sText DWORD iStyle Number Pos ;
        Integer Dec Number nMaxLen
Procedure DFWritelnPosToPage Integer Page String sText DWORD iStyle Number Pos ;
        Integer Dec Number nMaxLen

All of these procedures now receive an additional parameter, nMaxLen, which determines the maximum length of the output string. If 0 is passed, the entire string is output.

This parameter is optional. If not passed, 0 is used. This parameter was made optional to maintain backward compatibility. You are encouraged to always pass the length parameter.

For example:

Send DfWritePos to WinPrintId sMyValue (Font.Default) -1 7.2
Send DfWritePos to WinPrintId sMyNumber (Font.Default+Font.Right) 2 0

WinPrint DLL External Functions

Two new private external messages have been added. Developers should not be using these messages. They should be using the commands and messages listed above.

External_Function32 WriteLineToPositionInchEx "WriteLineToPositionInchEx" DFPRINT.DLL ;
Integer iPageNr ;
String sText;
Integer iTextLen;
Integer iLineFeed;
Integer iPosition;
Integer iDecimal;
Integer iLength; // new max output length parameter, 0 = All,
Returns Integer

External_Function32 WriteLineToPositionMmEx "WriteLineToPositionMmEx" DFPRINT.DLL;
Integer iPageNr ;
String sText;
Integer iTextLen;
Integer iLineFeed;
Integer iPosition;
Integer iDecimal;
Integer iLength; // new max output length parameter, 0 = All,
Returns Integer

These are identical to WriteLineToPositionMm and WriteLineToPositionInch except that they are passed an additional length parameter. The two old functions are now obsolete and are maintained only for backward compatibility; the new "Ex" functions should be used in their place. The WinPrint global object now calls these new functions.

Back to WinPrint

Report Wizard

  • The report wizard will now generate source code that uses the maximum length parameter.
  • The wizard now does a more intelligent job of determining the expected length and position of an output field.
  • A new registry setting allows you to output wizard-generated code in inches (in addition to the default centimeter output). The registry key is ..\WinPrint\ReportWizard\Metrics; setting this value to INCH will cause the report wizard to generate output in inches. Setting it to CM (or anything other than INCH), or leaving it blank, will cause the wizard to generate the code in centimeters.

Back to WinPrint

Registry Settings for the Report Wizard

The Report Wizard supports a number of registry settings that can be used to customize the wizard’s report generation process. While there is no direct access to these values from within the IDE you can set these values manually. You can do this by selecting the Modify Workspace option from the IDE’s Workspace menu. From within the explorer select Other Keys, and from there select WinPrint. You will then probably need to create a new sub-key named ReportWizard. All of the following keys and data can be created within this section.

All of these registry settings are string type. They determine how your source code will be generated. For example, if you want your reports to use “Times New Roman” instead of “Arial” as your body font you would change the Body_Font value to “Times New Roman.”  Refer to the documentation for a complete list of registry settings.
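
For example, a hedged sketch of two string values under the ..\WinPrint\ReportWizard key (the value names come from this section; the data shown is illustrative):

Metrics      INCH
Body_Font    Times New Roman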

Back to WinPrint

Contents


Database Explorer

Database Explorer now supports:

  • Database driver loading.
  • Toggle file list on/off.
  • Choosing which value will be displayed as the filename in the file list.
  • A multi-user data refresh (timer based or button based).
  • More attribute and field information is shown.
  • Show/hide file numbers in the file list.
  • A toggle function to skip text fields in the data grid.
  • Selecting fields for a partial data grid build can now be done in user-selected order.
  • Local EPOCH support is available.
  • The export data part was replaced by a data export wizard that can generate source code.
  • This wizard can also be started from other programs.

Contents

Back to top