A number of bug fixes have
been made in the runtime and the packages. A list of these can be
found at www.dataaccess.com/visualdataflex/updates. By recompiling
your programs and running them under the VDF6 runtime, the bug
fixes are immediately applied to your programs. There are also
several changes that may require changes in your programs.
Changes were made in the Data
Dictionaries to better support batch processes – in particular,
processes generated by the DataFlex WebApp Server. Most of these
changes were made to be invisible to existing applications. The most
important changes are:
- The DDO’s field
validation did not perform Required and Find_Req validations. In
visual views, the DEO would normally perform these operations,
but in batch processes these validations were skipped. The
validation now occurs as expected. This has caused a certain
amount of confusion, which is discussed below.
- A new message,
File_Field_Entry, was created that moves data into the DDO’s
field buffer in much the same way keyboard entry does. It
properly disables entry into fields that are No_Put (both
regular and foreign fields), applies Capslock as needed, and
performs autofinds as needed. In some batch-processing
situations you will want to use this message instead of Set
File_Field_Current_Value.
- A new message,
File_Field_Find, was created that performs a find similar to the
Item_Find message, except that the data for the find is moved
into the DDO buffer and not the DEO buffer.
- The messages
DefineExtendedField and DefineAllExtendedFields can be used to
create DDO support for text and binary fields. This feature was
added primarily for batch processing. Once the extended fields
are created, the messages Get/Set File_Field_Current_Value,
Get/Set Field_Current_Value, Set File_Field_Changed_Value, Set
Field_Changed_Value and Set File_Field_Entry can be used to
modify text fields. Again, this should really only be used with
batch processes. Other extended field messages have been
created, but they are considered advanced and should be used
with care.
- A new property,
Allow_Foreign_New_Save_State, makes it easier to save new parent
records when a child record is being saved. For example, you may
wish to save a header (parent) record the first time the detail
(child) record is saved. In such a case, the parent record
should not behave as a foreign file, and its
Allow_Foreign_New_Save_State should be set to True.
- Changed the way errors are
reported when the DD encounters field_current_value messages for
extended and overlapped fields. In 5.0.6 we started reporting
errors that were ignored in prior revisions. This has caused a
great deal of confusion.
- Within data dictionary
objects, the message OnConstrain should be used in place of the
Begin_Constraints/End_Constraints command (or the Constrain
procedure).
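As a minimal sketch of how these messages fit a batch process (the Customer table, its field names, and the Customer_DD object are assumptions for illustration; check the exact message syntax against the class reference):

```dataflex
// Hypothetical batch import: Customer_DD, Customer.Name and
// Customer.City are assumed names, not part of the shipped packages.
Procedure Import_Customer String sName String sCity
    Send Clear to Customer_DD
    // File_Field_Entry moves data into the DDO buffer the way
    // keyboard entry would, honoring No_Put, Capslock and autofind:
    Set File_Field_Entry of Customer_DD File_Field Customer.Name to sName
    Set File_Field_Entry of Customer_DD File_Field Customer.City to sCity
    Send Request_Save to Customer_DD
End_Procedure
```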
IMPORTANT:
In VDF, the data dictionary objects only directly support date,
string and numeric fields. Text, binary and overlap fields are
not supported, and attempting to access them from within the DD
is an error. The most likely improper use of this would be to
get or set the field value with the Field_Current_Value message.
In previous versions of the DD packages these errors were
handled by ignoring the message and doing nothing. While the
error was not reported, the DDO was definitely not doing what
you wanted it to do. The right thing to do was to report the
error, so in revision 5.0.6 these errors were reported as
“Extended Field not defined” errors. This caused confusion
because programs that previously appeared to be working now
generated error messages. These errors occurred under the
following conditions:
- You attempted to access a
text or binary field with the messages Get/Set
Field_Current_Value, Get/Set File_Field_Current_Value or Get
Field_Changed_Value. Since these fields are not supported, this
is always an error. You should look at your code and make the
required corrections.
- You are improperly using
overlap fields. You should not directly access overlap fields with
DDO messages. It is expected that you will use the underlying
(primary) fields that make up the overlap. In particular:
- You should not access
overlaps with the field_current_value, file_field_current_value
or field_changed_value messages.
- You should never mark
any overlap field as a Key-field. It doesn’t work. Instead,
you should mark each underlying field as a key-field. A key
can consist of multiple fields.
- You should not use
overlap fields with field options, validation tables, entry,
exit or validation messages. You should use the underlying
fields instead.
Because this has caused so
much confusion, the packages have been changed so that a new
runtime error message will be generated if you attempt to
improperly use an overlap field. The reported error will be “999
- Invalid use of overlap with DD.” When displayed, the overlap
file and field number will also be displayed. This is a bug in
your program that must be fixed. In most cases, you will fix
this by loading the Data Dictionary Builder, removing the DD
field settings from the overlap field and adding the field
settings to the underlying fields.
If you have been improperly
accessing text or binary fields you will receive the error message
“999 - Extended Field not defined in DD”. The offending file and
field will also be reported with the error. You will want to fix
this error in your code.
IMPORTANT:
In previous versions of VDF, Required and Find_Required validations
were not performed if the field needing the validation was not
visually represented in a view. This validation loophole has now
been closed. This has caused some problems with existing programs.
The most likely causes of these problems, and their solutions,
are:
- When a child field relates
to a parent field, the required and find_req settings should
always be applied to the parent field and not the child field.
Setting them on the child field is an error. In previous
versions of VDF this error went undetected. In VDF6 you will
want to correct this.
- You should never set
overlap fields as required or find_req.
- Do not set overlap fields
as key-fields. Instead, you should set each underlying field that
makes up the overlap as a key field.
- If you are saving a new
parent record at the same time you are trying to save a new child
record you may receive a required validation error. This occurs
because the parent is now treated as a foreign file. If any of the
parent’s foreign validations are required or find_req (and they
probably are) you will get an error message when you attempt to
save the child record (which is attempting to also save a new
parent record). The solution is to set the parent’s
allow_foreign_new_save_state to true. This tells the parent DDO
that it should not treat the file as a foreign file.
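The last point might look like this in a view’s DDO structure (the order header/detail tables and object names are assumptions for illustration):

```dataflex
// Sketch: OrderHea is the parent (header), OrderDtl the child (detail).
Object OrderHea_DD is an OrderHea_DataDictionary
    // Let a brand-new header record be saved along with the detail
    // instead of failing the foreign-file validation:
    Set Allow_Foreign_New_Save_State to True
End_Object

Object OrderDtl_DD is an OrderDtl_DataDictionary
    Set DDO_Server to OrderHea_DD
End_Object
```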
A number of internal changes
were made to data-dictionaries to support advanced batch processing
required for the DataFlex WebApp Server. These changes are private
and should have no effect on current programs. They are presented
here in detail for completeness.
Function Field_Validate /
Request_Validate
Field_Validate now performs a
Required and Find_Req validation as needed. Previously, only
data-entry objects performed these validations, which presented
a problem with batch processing. Two new private functions,
Validate_Required and Validate_FindReq, were created to support
this change.
IMPORTANT:
It is possible that the change in Field_Validate may alter the way
your program works. The most likely change is that you will see
a required or find-required error occur where previously none
occurred. In previous revisions of the data dictionary, required
and find-required validations were not performed unless a DEO
existed for the field within the view. This was a defect. It
meant that these validation tests were never performed during
batch processes and that they might not be performed during
normal data entry (if the field was not present as a visual DEO
item).
If you start encountering
errors, you may wish to check your DD field settings; your logic
may be incorrect. Find_Required is most often used with foreign
files (parent files). If you are encountering new validation
problems with find_required, keep in mind that setting the
Validate_Foreign_File_State property to False can disable the
foreign (parent) field validation. This property is normally set
within a DD object, not the class, and it is set in the main DDO,
not the parent DDO. It can be set for an entire view or set
conditionally by augmenting the function
Request_Validate.
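A conditional augmentation might be sketched as follows; the bBatchMode flag is a hypothetical property of your own, not part of the packages:

```dataflex
// Inside the main DDO of the view:
Function Request_Validate Returns Integer
    Integer iFail
    // Skip foreign (parent) field validation during batch runs only:
    If (bBatchMode(self)) Set Validate_Foreign_File_State to False
    Forward Get Request_Validate to iFail
    Function_Return iFail
End_Function
```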
Function Validate_data_Sets
/ Function Validate_Fields
These functions are now
passed an additional integer parameter (bNoStop), which
determines if the validation should end when the first error is
encountered. These are private messages, which should not be
sent or augmented, so this change should be transparent to most
applications. If you have used these messages, you will need to
change your code.
Function
Data_Set
Previously this function only
traversed upwards when looking for the DDO owner of the passed file.
This has been changed. If the owner DDO is not found during the
upward sweep it will traverse in a downward direction. Since DDOs
are almost always found in an upwards sweep, this should have no
impact on applications.
Get/Set
File_Field_Current_Value / Field_Current_Value
These messages may be used
with extended DDO fields (text and binary). This allows you to move
text fields in and out of strings. This is only supported if
extended fields are supported, if the extended field has been
created for the DDO, and the maximum string length is large enough
to hold the extended value. If any of these conditions are not met,
an error will occur.
It is expected that you
will use these messages only during batch processing.
The new VDF6 runtime
contains changes to support direct access to heap memory via
pointers. This enhancement has been used to provide basic support
for extended fields in the DD. Extended fields are text and
binary fields; normally, DDs do not support these field types
internally. These changes can only be used with runtimes that
support the new pointer logic and the Windows heap memory
interface.
Extended fields are supported
as follows: When a DD is created, local buffers are not created for
extended fields (text and binary). If these fields are not needed,
you do not want to incur the added overhead of these fields. An
extended DD field can be created for any field by sending the
message DefineExtendedField passing the field number of the extended
field. Extended DD fields can be created for all text and binary
fields within a DDO by sending the message
DefineAllExtendedFields.
When an extended DD field is
created, a “field object” is created for this field. Within this
field object, heap memory is allocated for the field’s buffer.
Once created, values are moved between the file buffer and the
extended fields in the same way they are moved in and out of the
normal fields. The “refresh” process moves data from the file
buffer to the DD buffer, and the “update” process moves data
from the DD buffer to the file buffer. In addition, a mechanism
is provided for updating the DD buffer value.
The DDO has an interface that
allows access to the field-object. In addition, once the DD has
identified the Field object id, an interface exists within the field
object that can be directly accessed. Currently the Field-object
interface is private, and is therefore not
documented.
The DD extended field
interface is:
Procedure DefineExtendedField

    Send DefineExtendedField Field FileName.FieldName
An extended DD field can be
created for any field by sending the message DefineExtendedField
passing the field number of the extended field.
    Object Customer_DD is a Customer_DataDictionary
        Send DefineExtendedField Field Customer.Notes
    End_Object
Procedure DefineAllExtendedFields

    Send DefineAllExtendedFields
Extended DD fields can be
created for all text and binary fields within a DDO by sending the
message DefineAllExtendedFields.
    Object Customer_DD is a Customer_DataDictionary
        Send DefineAllExtendedFields
    End_Object
Procedure Set File_Field_Pointer_Entry

    Set File_Field_Pointer_Entry of hDD iFile iField iLen bShowErr to pData
    Set File_Field_Pointer_Entry of hDD File_Field FileName.FieldName iLen bShowErr to pData
This is called to move data
from an entry source to the DD extended buffer. It is identical
to the File_Field_Entry message except that a pointer to the
data is passed. You must make sure that this pointer addresses
valid memory and that the length of the data is correct. If the
length passed, iLen, is less than the field length, the rest of
the field will be zero-filled. If the length passed is greater
than the field length, the data will be truncated.
The parameter bShowErr
determines if an error should be generated if the data is
invalid for the field type. The value should not matter, since
it is not possible to pass invalid data to the currently
supported extended field types.
If the extended field does
not exist, an error will be generated.
You may use the message Set
File_Field_Entry to enter data into all fields (including text and
binary). If this message is used with an extended field, the passed
string value will be converted to a pointer and the message will be
directed to the extended pointer messages.
Procedure Set File_Field_Current_Pointer_Value / Field_Current_Pointer_Value

    Set File_Field_Current_Pointer_Value iFile iField iLen to pData
    Set File_Field_Current_Pointer_Value File_Field FileName.FieldName iLen to pData
    Set Field_Current_Pointer_Value iField iLen to pData
    Set Field_Current_Pointer_Value Field FileName.FieldName iLen to pData
This is called to move data
from an entry source to the DD extended buffer. It is identical
to the Set File_Field_Current_Value and Field_Current_Value
messages except that a pointer to the data is passed. You must
make sure that this pointer addresses valid memory and that the
length of the data is correct. If the length passed, iLen, is
less than the field length, the rest of the field will be
zero-filled. If the length passed is greater than the field
length, the data will be truncated.
It is expected that this
message would only be used in batch processing.
If the extended field does
not exist, an error will be generated.
You may use the messages Set
File_Field_Current_Value, Set Field_Current_Value, Set
File_field_Changed_value or Set Field_Changed_Value to enter data
into all fields (including text and binary). If these messages are
used with an extended field, the passed string value will be
converted to a pointer and the message will be directed to the
extended pointer messages.
    String sData
    Get File_Field_Current_Value File_Field Customer.Comments to sData
    Move (Lowercase(sData)) to sData
    Set File_Field_Changed_Value File_Field Customer.Comments to sData
Function File_Field_Current_Pointer_Value / Field_Current_Pointer_Value

    Get File_Field_Current_Pointer_Value of hDD iFile iField to pData
    Get File_Field_Current_Pointer_Value of hDD File_Field FileName.FieldName to pData
    Get Field_Current_Pointer_Value of hDD iField to pData
    Get Field_Current_Pointer_Value of hDD Field FileName.FieldName to pData
These functions return a
pointer to the data in the extended field object. You can use this
to make a copy of this data. While you could use this to change the
data, you are not encouraged to do so. If you need to change the
data in the extended field object use the File_field_Pointer_Entry
message.
Note that this message is not
similar to Field_Current_Value, where the string value of the DD
buffer is returned in a new string. This returns a pointer to
the actual DD data. It does not create a copy of the data; you
must do that yourself, for example by moving the pointer to a
string (make sure that the maximum string size is big enough to
hold the data).
    Address pData
    String sData
    Get File_Field_Current_Pointer_Value File_Field Customer.Comments to pData
    Move pData to sData
You may also use the messages
Get File_Field_Current_Value and Get Field_Current_Value to retrieve
extended fields to a string. If the extended field does not exist or
the field’s length is greater than the maximum allowable string
size, an error will be generated. The sample above could be
rewritten as:
    String sData
    Get File_Field_Current_Value File_Field Customer.Comments to sData

Function Field_Object

    Get Field_Object iField to hFieldObj
    Get Field_Object Field FileName.FieldName to hFieldObj
Returns the object ID of the
extended field object. If zero, no extended field object
exists.
Procedure Set Field_Pointer_Entry integer iField integer iOpts integer iLen integer bShowErr Address pData
Private: Use
File_Field_Pointer_Entry
Procedure
ExtendedFieldsUpdate integer bSave
Private: Called when data
must be moved from the DD fields to the file buffer. This
updates all extended fields. If bSave is true, the update is for
a save; if false, it is for a find. Normally, extended DD fields
are not updated during a find (since extended fields cannot have
indexes).
Procedure
ExtendedFieldsRefresh integer iRec
Private: This is called when
a new record has been found or cleared and data must be moved from
the file buffer to the DD Buffer. This refreshes all extended field
buffers. If iRec is zero (0), the record is being cleared, else iRec
determines the record number of the new record.
The VDF class which “wraps” a
WinQL (a.k.a. Crystal) report has been changed from WinQLReport
to CrystalReport. Within your programs, you will need to change
all instances of WinQLReport to CrystalReport.
A bug in the previous error
handler caused errors on lines above 64K not to be properly
displayed. The fix for this required that an additional
parameter be passed to the Error_Report message. If your
programs do not directly send or augment the Error_Report
message (and most programs do not), you will not need to make
any changes to your program. If your programs do send or augment
this message, you will need to make the changes described below.
The error object handler
processes errors by sending the message Error_Report. In
previous runtimes, two parameters were passed to this procedure:
errorInfo (integer) and errorText (string). ErrorInfo was a
complex integer containing both the error number and the line
number. This “packing” scheme only works with programs of fewer
than 64K lines. When a program contains more than 64K lines,
errors will not be properly reported.
This has been fixed in the
runtime. The Error_report message now receives three (non-packed)
variables. The new format for error_report is:
    Procedure Error_Report integer iErrNum integer iErrLine string sErrMsg
IMPORTANT:
If you have created any augmentations of Error_report you
must change your programs. Your procedure must now receive three
parameters and you must make sure you forward three parameters. For
example, assume that you had the following
procedure:
    Procedure Error_Report integer iErrInfo string sErrMsg
        Integer hErr
        Integer iErrNum iErrLine
        If (Error_Processing_State(self) = False) Begin
            Set Error_Processing_State to True // prevents recursion
            Move (Hi(iErrInfo)) to iErrNum
            Move (Low(iErrInfo)) to iErrLine
            Send Log_this_Error iErrNum iErrLine sErrMsg
            Get Old_Error_Object_Id to hErr
            Send Error_Report to hErr iErrInfo sErrMsg
            Set Error_Processing_State to False
        End
    End_Procedure
It would be changed as
follows:
    Procedure Error_Report integer iErrNum integer iErrLine string sErrMsg
        Handle hErr
        If (Error_Processing_State(self) = False) Begin
            Set Error_Processing_State to True // prevents recursion
            Send Log_this_Error iErrNum iErrLine sErrMsg
            Get Old_Error_Object_Id to hErr
            Send Error_Report to hErr iErrNum iErrLine sErrMsg
            Set Error_Processing_State to False
        End
    End_Procedure
If you are sending the
message Error_Report to trigger an error, you will also need to
change your parameter list. However, you should not be using
this technique at all! If you wish to generate an error, you
should do so with the Error command. The message Error_Report
should only ever be sent from within the Error_Report procedure.
Some developers have been using this improper technique as a
means of generating errors with text longer than 40 characters.
Now that this 40-character limit has been removed, you should no
longer need to use this technique.
IMPORTANT:
If you find that you are encountering nonsense error messages such
as error numbers of 0 and error text that makes no sense, then your
program probably contains calls to error_report or augmentations of
error_report, and you did not change your source to handle the
additional parameter.
The Error command now
supports text strings of more than 40 characters; the new limit
is 2,048 characters. This allows you to pass much longer error
messages in the Error command. For example:
    Error 300 “This string can now be MUCH longer than 40 characters”
You can now pass a length
parameter with the DfWritePos and DfWriteLnPos commands. This can be
used to solve the problem of overlapping output. This change is 100%
backwards compatible.
The new syntax for the
commands is:

    dfWritePos variable position [attributes [decimal_places [MaxLength]]]
    dfWriteLnPos variable position [attributes [decimal_places [MaxLength]]]
The new parameter, MaxLength,
determines the maximum length of the output value using the
current system metrics (Inch or Cm). The Report Wizard has been
modified to use the max-length feature.
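A hypothetical report line using the new parameter (the field names are placeholders, 0 is passed for the optional attributes, and positions and lengths use the current metric, inches here):

```dataflex
// Truncate Customer.Name so it cannot overlap the column at 4 inches:
dfWritePos   Customer.Name    1.0 0 0 2.5 // start at 1", at most 2.5" wide
dfWriteLnPos Customer.Balance 4.0 0 2 1.0 // 2 decimals, at most 1" wide
```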
Contents
- New Explorer Style Help
- Help Ordering
- New Language Guide
- Language Reference
We continue to make efforts
to improve the format, content and accessibility of our
documentation. The most significant changes for VDF6
are:
Help is now presented in a
split screen, Explorer style format with the help topics (Contents,
Index or Search) presented on the left and the current help topic
presented on the right. You are probably looking at this new help
format right now. You should find this new format to be both easier
to use and more powerful.
The ordering of the main help
topics is:
- IDE: Provides a complete
guide to using the IDE. It covers all topics related to developing
applications with the IDE.
- Database Builder: Explains
how to use Database Builder to create data-files and to build and
modify data-dictionaries.
- Debugger: Explains how the
new VDF debugger is used.
- Language Guide: This new
document provides a complete introduction to the DataFlex
language.
- Language Reference:
Previously called “Command Reference.” It contains information
about all VDF commands, functions, variables and tokens. Both the
format and content have been significantly improved.
- Class Reference: Provides
reference to all VDF classes.
- Developer's Guide: This
section contains information about various development topics.
These are the topics that did not fit neatly into other areas.
This was previously called the Language Guide.
- Compiler: This provides
information about the VDF compiler.
The Language Guide is a new
and important addition to VDF. It provides an introduction to the
VDF Language. This document is presented in both printed and on-line
format. New developers will find the guide to be invaluable. Current
developers are also encouraged to review the guide. It provides an
excellent overview of VDF’s new suggested usage.
The old VDF “Command
Reference” is now called the “Language Reference.” Both its
format and content have been significantly altered. These
changes include:
- Over 100 commands have
been marked as obsolete and moved to a special section for
obsolete commands and functions. Each obsolete entry contains a
link to the command or function that should replace it.
- Functions, global
variables, database API attributes and compiler directives have
been moved into their own separate sub-sections.
- The Command sub-section is
much smaller and now only contains current commands.
- All documentation and
samples have been reviewed and updated to make sure that the
information presented is accurate for VDF6 and adheres to
suggested programming styles. (The exception to this is the
obsolete commands, which have not been changed).
IMPORTANT:
Commands and functions that are now marked as obsolete can still be
used in VDF6. Your existing programs will still run. These items
have been marked as obsolete primarily to serve as a guide for your
future program development. While you do not need to change your
existing programs you are strongly encouraged to not use obsolete
commands in your new programs.
Crystal Report Writer 7
(CRW7) is now provided as a standard component of VDF6. It
replaces WinQL (the version of Crystal Reports provided with
VDF5). CRW7 is backward compatible with WinQL: all reports
created with WinQL will run unchanged using CRW7.
IMPORTANT:
The name of the Crystal Report class has been changed from
WinQLReport to CrystalReport. Any existing reports based on the old
WinQLReport class will need to be changed to
CrystalReport.
The change to CRW7 will
impact you in the following ways:
- You now have full access
to the many advanced features and enhancements found in
CRW7.
- Reports created using
WinQL may be used with CRW7 without changes.
- Within existing report
views, you will need to rename all instances of WinQLReport to
CrystalReport. Other than the name change, these classes are
100% compatible. The name change was made to remove any possible
confusion about support for WinQL versus Crystal.
- The CrystalReport class
does not yet take advantage of all of CRW7’s new features. Future
revisions of this package will address this.
- A Crystal Report Writer
wizard has been added to the IDE. It allows you to select an
existing Crystal report and to create a report view with a “front
end” specially customized for this report.
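In source, the rename is mechanical; a sketch (the object name is an assumption):

```dataflex
// Before (VDF5):
//   Object Customer_Rpt is a WinQLReport
// After (VDF6): only the class name changes.
Object Customer_Rpt is a CrystalReport
    // ...existing property settings remain unchanged...
End_Object
```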
Contents
- The DataFlex Connectivity Kit for ODBC
- The DataFlex Connectivity Kit for Pervasive.SQL
- The DataFlex Connectivity Kit for IBM DB2
New
Features
- The “number of records in
use” attribute has been implemented.
- The handling of nulls was
until now implicit and ambiguous. We have changed this into
explicit null support: values are not null unless the programmer
specifies them to be null, using the DF_FIELD_IS_NULL attribute,
which can be set to true and false. Setting a field to null will
clear its value.
- Switched statements to
NOSCAN. ODBC offers so-called escape clause support: all
executed statements are scanned for escape clauses, which are
replaced by the vendor-specific equivalent. Most statements we
use do not contain escape clauses at all, so we have switched
escape clause scanning off for all statements where this is
possible. This should result in faster statement parsing.
- Database Builder and
Drvr_cnv schema name support for conversion. It is now possible
to supply a schema name to convert to. This can be done for both
ODBC and DB2, and is useful when converting files with identical
names to the database; usually the original database has these
files in different directories.
- When opening a table, the
primary key information will be read from the backend (not to be
confused with the DF record identity). The primary key fields
are defined as an index. The user can set the index number for
this index by using the INDEX_NAME keyword in the intermediate
file and setting it to SQL_PRIMARY. Be aware that not all ODBC
drivers support getting this information; in that case you
cannot use this way to define an index.
- Database Builder log and
run unattended: Database Builder will now create a log file when
converting to ODBC or DB2. It can also be set up to run
unattended, writing errors to the log file instead of popping
them up and waiting for user input. This functionality can be
found in Database Builder 1.089; betas of Database Builder for
VDF5 can be downloaded from the Data Access FTP site
ftp://ftp.dataaccess.com, under anonymous/pub/updates/beta.
- Setting the DF_FILE_ALIAS
attribute to DF_FILE_ALIAS_DEFAULT is now supported.
- Primary index checking.
The Data Access ODBC Client now checks for primary indexes more
strictly. It was previously possible to access a table with a
missing or badly defined primary index. The check for a legal
primary index was added to the find, delete, update (saving
existing records with changes), get-recnum and set-recnum
functions.
- The “connect to ODBC”
option in Database Builder now generates an FD file.
- It is now possible to
connect or convert to a file data source.
- Introduced the SCHEMA_NAME
intermediate file keyword. In SQL databases, different schemas
can exist within one database, and two schemas can contain
tables with the same name. This could result in errors when
opening a table that occurs in multiple schemas to which the
user has access rights: ODBC would report the fields of all such
tables instead of just the one in the intended schema. This can
be fixed by adding the SCHEMA_NAME keyword to the intermediate
file of the table in question, set to the name of the schema the
table is defined in.
- Overlap fields in indexes.
The conversion logic did not support overlap fields in indexes;
it simply would not convert them. This has been adjusted: the
index definition on the backend will be an index containing all
overlapped fields.
- Conversion types with the
length in the middle. The conversion logic assumed all types
defined the field length at the end of the definition, as in
“BINARY (255)”. It turns out there are backends that do not
comply with this expectation. The driver now checks where the
size must be placed, so you can convert to types such as
“CHARacter (255) FOR BIT DATA”.
- An unsuccessful find would
clear the buffer. This behavior has been removed; the buffer
stays in the same state as before the find.
- Introduced the
PRIMARY_INDEX_TRIGGER intermediate file keyword. You can
identify a primary index as being triggered by setting the
intermediate file keyword PRIMARY_INDEX_TRIGGER to YES; the
default is NO. If this is set for a file, the driver will try to
determine the new record number of a created record
automatically. This is done by performing a “select
max(recordid)” after the record has been created. Setting up the
trigger itself is the user’s responsibility.
- After a save operation the
record will be re-found. This ensures the correct information is
in the buffer in case the server has triggers defined on some of
the columns in the table. Doing a re-find after each save will
slow down save operations, and not all files have triggers
defined, so the extra find is not needed after every save. You
can set up the behavior on a file-by-file basis by setting the
“REFIND_AFTER_SAVE” intermediate file keyword to “YES”; “NO” is
the default.
- Default index names. The
index names generated by the Structure_End logic have changed
from <Tablename>1, <Tablename>2 … to <Tablename>001,
<Tablename>002 …. This shows the indexes in the correct order
when querying them from the SQL backend.
- The Structure_End logic
supports all driver-specific intermediate file keywords. At this
moment it supports the following keywords: FIELD_OVERLAP_START,
FIELD_OVERLAP_END, PRIMARY_INDEX, PRIMARY_INDEX_TRIGGER,
SYSTEM_FILE, FIELD_STORE_TIME, MAX_ROWS_FETCHED, SCHEMA_NAME,
INDEX_NAME, TRANSLATE_OEM_TO_ANSI and REFIND_AFTER_SAVE.
Unsupported keywords are FIELD_OVERLAP_OFFSET_START and
FIELD_OVERLAP_OFFSET_END; they can be used in an intermediate
file, but the Structure_End logic will replace fields defined
this way with complete overlapped fields using the
FIELD_OVERLAP_START and FIELD_OVERLAP_END keywords.
- New DataFlex package layout: We adjusted the DataFlex packages for ODBC and DB2. Since a large part of the attributes overlap, we created three packages: CLI.PKG defines common functionality and constants; ODBC_DRV.PKG defines ODBC-specific functionality and constants; DB2_DRV.PKG defines DB2-specific functionality and constants. In a DataFlex program you only need to use the driver-specific packages.
- Made sure the extra attributes defined for the driver can be set and retrieved through the DataFlex attribute commands. For this purpose we have defined the following attribute constants: DF_FILE_MAX_ROWS_FETCHED, DF_FILE_PRIMARY_INDEX_TRIGGER, DF_FILE_TRANSLATE_OEM_TO_ANSI, DF_FILE_REFIND_AFTER_SAVE, DF_FIELD_STORE_TIME, DF_INDEX_NAME.
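A minimal sketch of reading and toggling one of these attributes at runtime with the standard Get_Attribute/Set_Attribute commands; the table (Customer.File_Number) and the surrounding procedure context are assumptions for illustration, not part of the release notes:

```
// Sketch: check whether re-find after save is enabled for a table,
// and enable it if it is not (Customer.File_Number is a placeholder).
Integer bRefind
Get_Attribute DF_FILE_REFIND_AFTER_SAVE of Customer.File_Number to bRefind
If (bRefind = 0) ;
    Set_Attribute DF_FILE_REFIND_AFTER_SAVE of Customer.File_Number to 1
```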
- When setting the type of a
new or existing column to date the size would not be set
automatically. This has been fixed. The size is set to 10, which
is the smallest size that can hold the string YYYY-MM-DD (the SQL
date representation). What the actual size will be once the change
has been made permanent depends on the backend.
Notes
The installation will no
longer include the Btrieve 6.15 Workstation
Engine.
New
Features
- Faster file opening: Files can now be opened faster by using a so-called structure cache. This method writes the complete structure of a file to a .CCH file so that the next open doesn't need to read the info from the DDF files but can read it straight from the CCH file.
- Faster finds: A new method has been developed to get the (internal) recnum for a file. This method may speed up finds by up to 30%.
- New field types: Support has been added for three field types that became available with Pervasive 7:
- CURRENCY: This field in Pervasive.SQL will be converted to a 14.4 BCD. The maximum number of digits for the integer part is 14 in DataFlex, while CURRENCY supports up to 15 integer digits. When a value cannot be represented in a BCD, or a BCD's value cannot be represented in a CURRENCY field, an error will be generated.
- TIMESTAMP: A Btrieve timestamp field holds the number of septaseconds (10^-7 seconds) since January 1st, 0001 in the Gregorian calendar. The value will be represented in DataFlex as a BCD, where the integer part of the BCD represents the number of seconds and the decimal part represents the fraction of a second.
- 64-bit INTEGER: These will be represented as a BCD in DataFlex. If the number in the file is too large to represent in a BCD, an error will be generated. If a value to be stored has a decimal part, an error will be generated as well.
New Locking / Transaction
Method
A new method for locking has been implemented. The previous version of the driver locked every file/record accessed during a transaction, including files that had their filemode set to READONLY. This has been changed so that a Concurrent transaction no longer locks files that have been set to READONLY. An Exclusive transaction will still lock each and every open file.
The driver supports two types of transactions: Exclusive and Concurrent. The default is Concurrent. Exclusive transactions lock a complete file, while Concurrent transactions only lock one record at a time.
The moment at which a lock
will be placed depends on the setting EXPLICIT_LOCKING. When this
has been set to 0, it will place a lock the first time a file/record
is accessed within a transaction. When set to 1, it will place the
lock immediately when the transaction is started.
TRANSACTION_TYPE =
EXCLUSIVE & EXPLICIT_LOCKING =
0
The files will be locked when accessed for the first time in a transaction. The FILE_MODE of a file doesn't matter; each file will be locked. Note that this mechanism can cause deadlocks!
TRANSACTION_TYPE =
EXCLUSIVE & EXPLICIT_LOCKING =
1
All open files will be locked
when a transaction is started, no matter what FILE_MODE is
used.
TRANSACTION_TYPE =
CONCURRENT & EXPLICIT_LOCKING =
0
Records will be locked when
accessed for the first time in a transaction. When a file's
FILE_MODE is set to READ_ONLY, records will NOT be locked, unless
the LOCK_READONLY setting has been set to 1. Note that this
mechanism can cause deadlocks!
TRANSACTION_TYPE =
CONCURRENT & EXPLICIT_LOCKING =
1
Active records will be locked
when the transaction is started. When a file's FILE_MODE is set to
READ_ONLY, records will NOT be locked, unless the LOCK_READONLY
setting has been set to 1. Note that this mechanism can cause
deadlocks!
LOCK_TIMEOUT &
LOCK_DELAY
Two keywords have been added to control the lock timeout and delay value when a record or file is in use. LOCK_TIMEOUT can be set to the number of milliseconds to keep trying to get a lock. When set to 0 it will retry until it succeeds. By default this setting is read from the DataFlex DF_LOCK_TIMEOUT attribute. LOCK_DELAY controls the number of milliseconds to pause between two lock attempts. By default this setting is read from the DataFlex DF_LOCK_DELAY attribute.
New
Features
- Spaces in column names were not supported. The driver will now generate a “quoted identifier” in such cases.
- The “number of records in use” attribute has been implemented.
- Added null value support. The handling of nulls until now was implicit and ambiguous. We have changed this into explicit null support: values are not null unless the programmer specifies them to be null. This is accomplished using the DF_FIELD_IS_NULL attribute, which can be set to true or false. Setting a field to null will clear its value.
- Switched statements to NOSCAN. The DB2 CLI offers so-called escape clause support: all statements executed are scanned for escape clauses, which are replaced by the vendor-specific equivalent. Most statements we use contain no escape clauses at all, so we have switched escape clause scanning off for all statements where this is possible. This should result in faster statement parsing.
- Database Builder & Drvr_cnv schema name support for conversion. It is now possible to supply a schema name to convert to, both for ODBC and DB2. This can be used when converting files with identical names to the database; usually the original database has these files in different directories.
- When opening a table, the primary key information will be read from the backend (not to be confused with the DataFlex record identity). The primary key fields are defined as an index. The user can set up the index number for this index by using the INDEX_NAME keyword in the intermediate file and setting it to SQL_PRIMARY.
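As a sketch, mapping the backend's primary key to a DataFlex index might look like this in the intermediate file; the driver, data source, table, and index number are hypothetical placeholders, and the exact .INT layout is assumed:

```
DRIVER_NAME ODBC_DRV
SERVER_NAME MyDataSource
DATABASE_NAME ORDERS
INDEX_NUMBER 1
INDEX_NAME SQL_PRIMARY
```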
- Changed the logic that determines the main index for a field. The main index of an overlap field is the main index of the first overlapped field that has a main index. For example, if an overlap covers fields 1, 2, and 3, and the main index of field 1 is 0 (none), the main index of field 2 is 3, and the main index of field 3 is 1, then the main index of the overlap field will be 3.
- Database Builder will now create a log file when converting to ODBC or DB2. It can also be set up to run in unattended mode, which writes errors to the log file instead of popping them up and waiting for user input. This functionality can be found in Database Builder 1.089; betas of Database Builder for VDF6 can be downloaded from the Data Access FTP site ftp://ftp.dataaccess.com, under anonymous/pub/updates/beta.
- We have downloaded and installed the ODBC 3.5 SDK, the new version of ODBC from Microsoft. It supports some new features, but it was mainly installed to keep up to date with developments in the ODBC area. We have not found any problems running existing installations with this update.
- Setting the DF_FILE_ALIAS
attribute to DF_FILE_ALIAS_DEFAULT is now supported.
- Field cleared after moving
invalid data. When moving an invalid value to a field the field’s
value would be cleared. This has been fixed. The original value
stays in the field.
- The Data Access ODBC Client will now check for primary indexes more strictly. It was possible to access a table with a missing or badly defined primary index. The check for a legal primary index was added to the find, delete, update (saving existing records with changes), get-recnum and set-recnum functions.
- The “connect to ODBC”
option in Database Builder now generates a FD file.
- The ability to connect or
convert to a file data source has been added.
- Introduced the SCHEMA_NAME intermediate file keyword. In an SQL database, different schemas can exist within one database, and two schemas can contain tables with the same name. This could result in errors when opening a table that occurs in multiple schemas while the user has access rights to all of them: ODBC would report the fields of all such tables instead of just the one in the intended schema. This can be fixed by adding the SCHEMA_NAME keyword to the intermediate file of the table in question, set to the name of the schema the table is defined in.
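A hypothetical intermediate file fragment for a table that lives in a SALES schema; all names here are placeholders, and the exact .INT layout is assumed:

```
DRIVER_NAME ODBC_DRV
SERVER_NAME MyDataSource
DATABASE_NAME CUSTOMER
SCHEMA_NAME SALES
```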
- Some backends have
specific types that do not match any of the pre-defined ODBC
types. These types would not be allowed. We changed this so the
type will be reported as TEXT.
- The conversion process would change the DataFlex definition of a file and then convert. This has been changed to manipulate the ODBC structure instead. This way the original DataFlex file remains intact even if something goes wrong during conversion. Furthermore, the record copy logic was changed not to stop when an error occurs but to continue with the next record.
- The conversion logic did
not support overlap fields in indexes. It simply would not
convert. This has been adjusted. The index definition on the
backend will be an index containing all overlapped fields.
- The conversion logic assumed all types defined the field length at the end of the definition, like “BINARY (255)”. It turns out there are backends that do not comply with this expectation. The driver now checks where the size must be placed, so you can convert to a type such as “CHARacter (255) FOR BIT DATA”.
- Introduced the PRIMARY_INDEX_TRIGGER intermediate file keyword. You can identify a primary index as being triggered. Setting the intermediate file keyword PRIMARY_INDEX_TRIGGER to YES sets this up; the default is NO. If this is set for a file, the driver will try to determine the new record number of a created record automatically. This is done by performing a “select max(recordid)” after the record has been created. Setting up the trigger itself is the user’s responsibility.
- Re-find after save. After a save operation the record will be re-found. This ensures the correct information is in the buffer in case the server has triggers defined on some of the columns in the table. Doing a re-find after each save will slow down the performance of save operations. Not all files have triggers defined, so it is not necessary to perform the extra find after every save. You can set up the behavior on a file-by-file basis by setting the “REFIND_AFTER_SAVE” intermediate file keyword to “YES”. Alternatively, it can be set to “NO”, which is the default.
- Partial overlaps. The way overlaps are defined in DataFlex is not enough to accurately convert an overlap field to another backend (any backend). Overlap fields are defined as an offset and a length. In another backend, the offset is usually not the same, and the length can also be completely different for an overlap that is functionally equivalent (overlaps the same fields). For this reason it will not always be possible to relate two overlaps in two different backends; there is nothing we can do about that. The current conversion logic converts all overlap fields as overlapping complete fields, regardless of the original definition. This is done by determining the start and end field of the overlap and using the offset and length of those fields in the converted structure to set up the field attributes. This is the best we can do at conversion. If a partial overlap is needed, the intermediate file must be edited manually. Partial overlaps are defined by specifying the start and end offset using the “FIELD_OVERLAP_OFFSET_START” and “FIELD_OVERLAP_OFFSET_END” intermediate file keywords. Remember that these are manual settings in the intermediate file; every restructure operation that results in a new intermediate file will overwrite them, because the structure_end logic forces overlaps to be on complete fields. When setting these keywords, keep in mind that ODBC fields can have different sizes from their DataFlex counterparts.
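A sketch of such a manual edit; the field number and offsets are hypothetical values, and the exact .INT layout is assumed:

```
FIELD_NUMBER 4
FIELD_OVERLAP_OFFSET_START 11
FIELD_OVERLAP_OFFSET_END 30
```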
- ANSI character set support. In a Windows environment there are two character sets in use: OEM and ANSI. DataFlex (both VDF and DF3.1c) uses the OEM character set; other tools may use another character set. The data in the database will be stored in the character set provided by the program. This may cause problems when using both DataFlex and some other, ANSI-oriented tool to access and manipulate the same data. You can set the driver up so it stores string fields in ANSI format. In that case, strings will be converted to ANSI when moved into the buffer and converted to OEM when moved out of the buffer. This can be set up on a file-by-file basis by setting the “TRANSLATE_OEM_TO_ANSI” intermediate file keyword to “YES”. Alternatively, it can be set to “NO”, which is the default.
- Default index names. The index names generated by the structure_end logic have changed from <Tablename>1, <Tablename>2 … to <Tablename>001, <Tablename>002 …. This shows the indexes in the correct order when querying from the SQL backend.
- INDEX_NAME support. When indexes use a different naming convention than the ODBC Client's default (001, 002, 003, etcetera), changing the index definition could be a problem: the ODBC Client's structure_end logic deletes all indexes and re-creates them, and to delete an index in ODBC you need to know its name. The INDEX_NAME intermediate file keyword can be used to record the actual backend name of an index.
- Structure_End driver-specific keyword support. The Structure_End logic supports all driver-specific intermediate file keywords. At this moment it supports the following keywords: FIELD_OVERLAP_START, FIELD_OVERLAP_END, PRIMARY_INDEX, PRIMARY_INDEX_TRIGGER, SYSTEM_FILE, FIELD_STORE_TIME, MAX_ROWS_FETCHED, SCHEMA_NAME, INDEX_NAME, TRANSLATE_OEM_TO_ANSI, REFIND_AFTER_SAVE. Unsupported keywords are FIELD_OVERLAP_OFFSET_START and FIELD_OVERLAP_OFFSET_END. They can be used in an intermediate file, but the Structure_End logic will replace fields defined this way with completely overlapped fields using the FIELD_OVERLAP_START and FIELD_OVERLAP_END keywords.
- Attribute support. We made sure the extra attributes defined for the driver can be set and retrieved through the DataFlex attribute commands. For this purpose we have defined the following attribute constants: DF_FILE_MAX_ROWS_FETCHED, DF_FILE_PRIMARY_INDEX_TRIGGER, DF_FILE_TRANSLATE_OEM_TO_ANSI, DF_FILE_REFIND_AFTER_SAVE, DF_FIELD_STORE_TIME, DF_INDEX_NAME.
- Date type size. When setting the type of a new or existing column to date, the size would not be set automatically. This has been fixed. The size is set to 10, which is the smallest size that can hold the string YYYY-MM-DD (the SQL date representation). What the actual size will be once the change has been made permanent depends on the backend.
- FOR FETCH ONLY added to select statements. All generated select statements now have the FOR FETCH ONLY clause. This should improve concurrency performance.
- FOR UPDATE and positioned updates/deletes. Implemented a scheme where we use the FOR UPDATE clause together with positioned updates and deletes. Essentially, we reuse the record found during the reread when updating or deleting it. This should speed up updating (amending) or deleting records. The optimization only works when Reread is used! (Or a Find Eq by recnum after a lock, but that is the same…)
Contents
Reporting Interface Changes | Report Wizard | Registry Settings for the Report Wizard
The latest version of
WinPrint is 1.22.
The most significant change
in WinPrint is that you can now pass a length parameter with the
DfWritePos and DfWriteLnPos commands. This can be used to solve the
problem of overlapping output. This change is 100% backwards
compatible.
The new syntax for the commands is:

dfWritePos variable position [attributes [decimal_places [MaxLength]]]
dfWriteLnPos variable position [attributes [decimal_places [MaxLength]]]
The Report Wizard has been
modified to use the max-length feature.
Commands
dfWritePos variable position [attributes [decimal_places [MaxLength]]]
dfWriteLnPos variable position [attributes [decimal_places [MaxLength]]]
The new parameter, MaxLength, determines the maximum length of the output value using the current system metrics (inch or cm).
For
example:
DfWritePos Customer.Name 5 FONT.DEFAULT -1 6.5
DfWritePos Customer.Total 5 FONT.DEFAULT 2 1.8
This is an optional parameter
and if not passed the entire output value will be output. Passing a
length of zero (0) will also output the entire
string.
If you wish to pass the MaxLength parameter, you must also pass parameters for Attributes and Decimal_places. Pass -1 for no decimal places (i.e., a string output). The following commands are the same:
DfWritePos Customer.Name 10
DfWritePos Customer.Name 10 FONT.DEFAULT
DfWritePos Customer.Name 10 FONT.DEFAULT -1
DfWritePos Customer.Name 10 FONT.DEFAULT -1 0
WinPrint Object Messages
Note: These messages are not
currently documented. Developers might use these and they should be
considered public messages.
Procedure DFWritePos String sText DWORD iStyle Number Pos Integer Dec Number nMaxLen
Procedure DFWritelnPos String sText DWORD iStyle Number Pos Integer Dec Number nMaxLen
Procedure DFWritePosToPage Integer Page String sText DWORD iStyle Number Pos ;
    Integer Dec Number nMaxLen
Procedure DFWritelnPosToPage Integer Page String sText DWORD iStyle Number Pos ;
    Integer Dec Number nMaxLen
All of these procedures now receive an additional parameter, nMaxLen, which determines the maximum length of the output string. If 0 is passed, the entire string is output.
This parameter is optional.
If not passed, 0 is used. This parameter was made optional to
maintain backward compatibility. You are encouraged to always pass
the length parameter.
For
example:
Send DfWritePos to WinPrintId sMyValue (Font.Default) -1 7.2
Send DfWritePos to WinPrintId sMyNumber (Font.Default+Font.Right) 2 0

WinPrint DLL External Functions
Two new private external
messages have been added. Developers should not be using these
messages. They should be using the commands and messages listed
above.
External_Function32 WriteLineToPositionInchEx "WriteLineToPositionInchEx" DFPRINT.DLL ;
    Integer iPageNr ;
    String sText ;
    Integer iTextLen ;
    Integer iLineFeed ;
    Integer iPosition ;
    Integer iDecimal ;
    Integer iLength ;  // new max output length parameter, 0 = All
    Returns Integer

External_Function32 WriteLineToPositionMmEx "WriteLineToPositionMmEx" DFPRINT.DLL ;
    Integer iPageNr ;
    String sText ;
    Integer iTextLen ;
    Integer iLineFeed ;
    Integer iPosition ;
    Integer iDecimal ;
    Integer iLength ;  // new max output length parameter, 0 = All
    Returns Integer
These are identical to WriteLinetoPositionMm and WriteLinetoPositionInch except that they are passed an additional length parameter. The two old functions are now obsolete and are maintained only for backward compatibility; the new "Ex" functions should be used in their place. The WinPrint global object now calls these new functions.
- The report wizard will now
generate source code that uses the maximum length
parameter.
- The wizard now does a more intelligent job of determining the expected length and position of an output field.
- A new registry setting
will allow you to output wizard generated code in inches (in
addition to the default Centimeter output). That registry key is
..\WinPrint\ReportWizard\Metrics and setting this value to INCH
will cause the report wizard to generate output in inches. Setting
this value to CM (or anything other than INCH) or leaving it blank
will cause the wizard to generate the code in Centimeters.
The Report Wizard supports a
number of registry settings that can be used to customize the
wizard’s report generation process. While there is no direct access
to these values from within the IDE you can set these values
manually. You can do this by selecting the Modify Workspace option
from the IDE’s Workspace menu. From within the explorer select Other
Keys, and from there select WinPrint. You will then probably need to
create a new sub-key named ReportWizard. All of the following keys
and data can be created within this section.
All of these registry
settings are string type. They determine how your source code will
be generated. For example, if you want your reports to use “Times
New Roman” instead of “Arial” as your body font you would change the
Body_Font value to “Times New Roman.” Refer to the
documentation for a complete list of registry
settings.
Contents
Database Explorer now
supports:
- Database driver
loading.
- Toggle file list
on/off.
- Choosing which value will
be displayed as filename in the file list.
- A multi-user data refresh
(timer based or button based).
- More attribute and field
information is shown.
- Showing/hiding file numbers in the file list.
- A toggle function to skip
text fields in the data grid.
- Selecting fields for a partial data grid build can now be done in user-selected order.
- Local EPOCH support is
available.
- The export data part was
replaced by a data export wizard that can generate source
code.
- If you like, you can also start this wizard from other programs.