Hello David - I'll start with a quick synopsis of the differences
between MVS mainframes and the rest of the computing world.
1) Symbols are stored in EBCDIC rather than ASCII. It's just another
8-bit code, but it must be translated, usually character-by-character
in a network transport protocol layer.
2) The hardware actually does decimal arithmetic directly on business
numbers that are stored as packed Binary Coded Decimal (each 4-bit
nibble is a decimal digit). These must also be expanded and translated
to ASCII, and vice-versa.
3) File structure is usually based on fixed-length fields, rather than
delimiters.
4) DB2 is IBM's original SQL-based relational database, built on the
relational model developed by Dr. E.F. Codd at IBM.
5) DB2 is a "layer" above VSAM (Virtual Storage Access Method).
6) VSAM is an improvement over ISAM (Indexed Sequential Access
Method).
7) ISAM also originated in the mainframe world and has been "ported
out" elsewhere.
8) MQ Series is like a "named pipe" or "socket" in UNIX parlance.
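Points 1-3 can be sketched in a few lines of Python (a sketch only: it assumes IBM code page 037 for the EBCDIC side, and the record layout in the last example is invented):

```python
# Sketch of the mainframe data translations described above.
# Assumption: the EBCDIC side uses IBM code page 037; other shops vary.

def ebcdic_to_ascii(data: bytes) -> str:
    """Point 1: translate EBCDIC bytes character-by-character via cp037."""
    return data.decode("cp037")

def unpack_decimal(data: bytes) -> int:
    """Point 2: decode packed (COMP-3) decimal -- one digit per 4-bit
    nibble, with the sign in the final nibble (0xD means negative)."""
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign = nibbles.pop()                     # last nibble carries the sign
    value = int("".join(str(n) for n in nibbles))
    return -value if sign == 0x0D else value

def parse_record(line: str, layout: dict) -> dict:
    """Point 3: slice a fixed-length record into named fields -- no delimiters."""
    return {name: line[start:end].strip() for name, (start, end) in layout.items()}

# "HELLO" in cp037 is C8 C5 D3 D3 D6; +12345 packed is 12 34 5C
print(ebcdic_to_ascii(b"\xC8\xC5\xD3\xD3\xD6"))   # HELLO
print(unpack_decimal(b"\x12\x34\x5C"))            # 12345
print(parse_record("000123SMITH     0420",
                   {"acct": (0, 6), "name": (6, 16), "balance": (16, 20)}))
```

In a real transport layer the code-page translation and the packed-decimal expansion usually happen together, record by record, as data crosses the network.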
Request for Answer Clarification by david617-ga on 25 Oct 2002 19:11 PDT
Hello Chris,
Thanks for this starter information. I will be very interested to
learn, as you answer this in more detail, about the integration
services that DB/2 provides between the mainframe VSAM data stores and
UNIX data stores.
I am assuming that this technical information is a precursor to a more
detailed explanation which will answer the questions as posed in the
original request. I am especially interested in taking this from the
perspective of the database architect/project manager, which is the
position that I would play in this engagement. I try to start at the
business requirements and move to implementation in a top-down
fashion.
I look forward to our future correspondence.
Regards,
David
Clarification of Answer by chris2002micrometer-ga on 26 Oct 2002 05:17 PDT
Hi David - I am in a hurry this morning and can't seem to find the
button for "clarification", so I will put it here in a comment. Of
course I expect to provide more detail as we progress, even if it
takes a few months. Rest assured. A DBA in the mainframe world
typically plans the storage requirements, normalizations, etc., creates
table "spaces" (VSAM files), and grants "privileges" (permissions).
Depending on the size of the project, it can amount to a full-time
responsibility for one or more people. Since your ODS is likely to
reside on the mainframe, I can guide you through the synchronization
steps. I will be out today and tomorrow, but will check in when I get
back.
Request for Answer Clarification by david617-ga on 30 Oct 2002 09:39 PST
Chris,
I thought I would put this out as a reminder. Haven't heard back from
you since Saturday.
Clarification of Answer by chris2002micrometer-ga on 30 Oct 2002 11:51 PST
I'm back. In the absence of a specific question, I will continue with
the generalities.
- A basic introduction to the implementation of a Business Rules
engine on a MQ/XML interface and the likely integration issues which
would arise.
- A comparison of the data update paradigms between VSAM files and an
SQL-driven environment
I will need to research the first of these a bit, but let me expound a
bit on the second. VSAM files can be read/written/updated randomly (by
key or position) and sequentially (forward and backward). More than
one key field can be established. These VSAM methods are generally
implemented in COBOL and are customized/standardized by the particular
site. Equivalent system-level methods come with DB2 and provide all
the needed functionality to support it. DB2 stores its tables in VSAM
files and provides several SQL interfaces and most of the
access-privilege infrastructure.
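The access styles just described can be modeled in a toy Python sketch (this is not real VSAM or DB2; a sorted dict stands in for a keyed VSAM file, and sqlite3 stands in for the SQL layer on top):

```python
import bisect
import sqlite3

# Toy model of the access styles described above: a KSDS-like file
# supports random reads by key plus forward and backward sequential
# reads; the SQL layer then expresses the same lookups declaratively.
records = {1001: "ADAMS", 1003: "BAKER", 1007: "CLARK"}   # key -> record
keys = sorted(records)

print(records[1003])                         # random read by key -> BAKER
print([records[k] for k in keys])            # sequential, forward
print([records[k] for k in reversed(keys)])  # sequential, backward

# Position at a key (like START ... KEY >= 1002), then read the next record:
pos = bisect.bisect_left(keys, 1002)
print(records[keys[pos]])                    # BAKER

# The same keyed lookup through SQL:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE master (acct INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO master VALUES (?, ?)", records.items())
print(db.execute("SELECT name FROM master WHERE acct >= 1002 "
                 "ORDER BY acct LIMIT 1").fetchone()[0])   # BAKER
```

The point of the sketch is that every SQL query bottoms out in some combination of these keyed and sequential operations, which is how DB2 sits on top of VSAM.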
You said that your client had bought, but not installed, DB2. This is
not a big problem here. Running parallel maintenance from a single SQL
source is not a good practice, even with identical databases. In my
experience at Verizon, the only sure way to maintain perfect
synchronization of separate databases is to reload them from a single
master copy of the data. This master is usually a composite of all of
the unloaded databases and transaction files. There are brief periods
of inaccessibility for the files involved, but these can be overcome,
or minimized, in the application's design. On the mainframe, a VSAM
load/unload and a DB2 load/unload are pretty much the same thing.
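The unload/reload discipline can be sketched in Python, with sqlite3 standing in for both stores (the table and column names are invented for illustration):

```python
import sqlite3

# Sketch of the unload/reload synchronization described above: rebuild
# each copy from one master rather than running parallel updates.
# Table and column names here are illustrative, not from a real system.

def unload(db):
    """Unload: dump every row of the master table to a flat list."""
    return db.execute("SELECT acct, balance FROM master ORDER BY acct").fetchall()

def reload(db, rows):
    """Reload: wipe the copy and repopulate it from the master unload."""
    db.execute("DELETE FROM copy")
    db.executemany("INSERT INTO copy VALUES (?, ?)", rows)

master = sqlite3.connect(":memory:")
master.execute("CREATE TABLE master (acct INTEGER PRIMARY KEY, balance INTEGER)")
master.executemany("INSERT INTO master VALUES (?, ?)", [(1, 100), (2, 250)])

replica = sqlite3.connect(":memory:")
replica.execute("CREATE TABLE copy (acct INTEGER PRIMARY KEY, balance INTEGER)")
replica.execute("INSERT INTO copy VALUES (9, 999)")   # a drifted, out-of-sync row

reload(replica, unload(master))           # the "one quick step"
print(replica.execute("SELECT * FROM copy ORDER BY acct").fetchall())
# [(1, 100), (2, 250)] -- drift is gone because the copy was rebuilt
```

The brief unavailability window is the time between the DELETE and the end of the reload; scheduling that window is the application-design concern mentioned above.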
I'll get back to you soon with some research info on MQ/XML business
rules.
Regards, Chris
Clarification of Answer by chris2002micrometer-ga on 30 Oct 2002 13:04 PST
Regarding the "Business Rules engine on a MQ/XML interface", my
experience is limited to accessing a Message Queue from within a COBOL
program. The MQ is handled pretty much like a "flat file", having a
beginning and, possibly, an end. The "business rules" consisted
entirely of my use of third-generation COBOL verbs and keywords to
carry out my task.
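That read pattern can be sketched outside the mainframe, with Python's standard queue standing in for MQSeries (a toy sketch; the sentinel marking end-of-queue is an assumption, as a real MQ read loop typically ends on a "no message available" return code):

```python
import queue

# Toy sketch of reading a message queue "like a flat file", with
# Python's standard queue standing in for MQSeries. The None sentinel
# marking end-of-queue is an assumption for the sketch.

mq = queue.Queue()
for msg in ("order 1001", "order 1002", None):   # None marks the end
    mq.put(msg)

processed = []
while True:
    msg = mq.get()                 # like READ NEXT on a sequential file
    if msg is None:                # the end-of-file condition
        break
    processed.append(msg.upper())  # the "business rules": plain 3GL logic

print(processed)   # ['ORDER 1001', 'ORDER 1002']
```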
I did a Google search for "Business Rules MQ XML" which yielded:
http://www-3.ibm.com/software/ts/mqseries/integrator/ost.html
Organisations using OST Financial Rules for MQSeries Integrator in
association with the suite of MQSeries products can benefit from a
complete and highly scaleable Enterprise Application Integration (EAI)
solution providing messaging, message broking, workflow and the
provision of financial and business rules within the integration
process. The integration of OST's product with MQSeries Integrator
provides the crucial link between the underlying technology and the
business user.
--IMHO, this is a "dilbertesque" offering of a 4th-generation
language ("wizard" on a PC) that would create a rapid, but simple and
inflexible prototype.
http://www.developer.ibm.com/dlc/pdf/XMLExtenderWebcast.pdf
The DB2 XML Extender will be used to compose XML Documents from
relational tables and decompose XML Documents into relational tables.
MQSeries will be used to store, transport and retrieve messages.
--This looks relevant, but the website was hard to view on my PC.
http://www.induslogic.com/products/xintdatasheet.pdf
Xintegrate(TM) is an XML-based horizontal integration platform that
provides a complete backbone to automate intra-company and
inter-company business processes by seamlessly translating data
across different applications both synchronously and asynchronously.
Built completely on widely deployed open protocols, Xintegrate delivers
wide integration flexibility to preserve your investment in
existing technologies. In addition, it comes with its own XML-based
data adapters that will allow you to work with data from virtually
anywhere.
--This looks like another package, but you could probably glean some
useful technology from their website.
You also should consider the cost/benefits of any package vs your own
3rd gen code to apply the business rules.
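For a sense of what the "compose XML from relational tables" half of the DB2 XML Extender flow involves, here is a sketch using only the Python standard library (element, table, and column names are invented for illustration; the real Extender maps these through Document Access Definitions):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of composing an XML document from relational rows, as the
# XML Extender webcast describes, using only the standard library.
# Element and table names here are invented for illustration.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, item TEXT, qty INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "WIDGET", 3), (2, "GEAR", 7)])

root = ET.Element("orders")
for order_id, item, qty in db.execute("SELECT id, item, qty FROM orders"):
    order = ET.SubElement(root, "order", id=str(order_id))
    ET.SubElement(order, "item").text = item
    ET.SubElement(order, "qty").text = str(qty)

message = ET.tostring(root, encoding="unicode")   # ready to put on a queue
print(message)
```

Decomposition is the mirror image: parse the incoming XML message and insert the extracted fields back into relational rows, with MQSeries carrying the message either way.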
Later, Chris
Request for Answer Clarification by david617-ga on 03 Nov 2002 16:05 PST
Chris,
Thanks for your excellent research on the XML Business Rules issues
which I had asked for as part of the question. I wonder if we could
now move on to some of the relational/mainframe integration issues.
C.J. Date, in Relational Database Writings 1991-1994, attempted to
come up with a unified framework for updating relational, flat-file
and hierarchical databases with a single query language -- a successor
to SQL, as it were. He gave up the effort as theoretically hopeless.
Since the situation I have posed involves communication with a
mainframe CIS and a relational ODS, this prospect seems daunting to
someone like myself with limited mainframe experience. I am sure that
it is done all the time, but it is not something that I am familiar
with.
So I wondered if you could examine the following integration issues
posed in my initial request:
- A comparison of the data update paradigms between VSAM files and an
SQL-driven environment
- Methods of synchronization which could be used between the CIS and
the ODS
- Any other tips and insights into converting what is a logically
elegant problem (I have done similar work for other banks in an
entirely relational environment and feel comfortable with the basic
paradigms involved) into a potentially messy physical implementation
Regards,
David
Clarification of Answer by chris2002micrometer-ga on 03 Nov 2002 20:48 PST
Hello David - I will add to my response concerning:
1) A comparison of the data update paradigms between VSAM files and an
SQL-driven environment
2) Methods of synchronization which could be used between the CIS and
the ODS
Let me preface #1 with my opinion that what is/was done on a mainframe
(originally) has been pretty much ported to other systems or done in a
very similar way. Having worked with both, the mainframe has always
impressed me as being more refined but also more "anal-retentive and
detail-oriented" at times. For example, on most systems other than a
mainframe, you can copy a file with a single command (copy, cp, como,
etc.). On a mainframe, you specify attributes, allocate space in
blocks, tracks or cylinders for primary and overflow storage and then
(finally) copy the data. There are a few "native" TSO (time share
option) commands and clever ISPF/REXX codes floating around that make
this a one-step process (think .BAT file or script) for simple file
structures. A VSAM file structure (with primary and alternate key
fields) often involves several dozen parameters (filling a screen),
but these are usually cloned from existing files. As I mentioned
earlier, DB2 provides the SQL interface to VSAM. Technically all SQL
queries can be expressed as some combination of VSAM operations. This
truly illustrates the elegance (but potential hazards) of SQL. Before
SQL, most file processing involved sequential passes with keys being
matched to a transaction file (a classic match/merge) or a random-key
access similar to SQL "select/where". A COBOL program of ten or more
pages can often be "replaced" with a 10-line SQL join. Unfortunately
the "business rules", where it is possible to implement them this way,
are rendered almost incomprehensible to the next programmer. It is
also possible (and likely) to create a very inefficient process with
seemingly clean SQL.
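The contrast just described can be shown side by side: the classic sequential match/merge against its SQL "replacement" (a sketch, with Python lists standing in for sorted files and sqlite3 for the SQL side; the record layouts are invented):

```python
import sqlite3

# Sketch of the classic sequential match/merge against its SQL
# "replacement". Both inputs must be sorted on the match key, as they
# would be on the mainframe. Record layouts are invented.

master = [(1, "ADAMS"), (2, "BAKER"), (4, "CLARK")]   # key, name
trans  = [(2, 50), (3, 75), (4, 20)]                  # key, amount

# Procedural match/merge: advance whichever side has the smaller key.
matched, i, j = [], 0, 0
while i < len(master) and j < len(trans):
    if master[i][0] < trans[j][0]:
        i += 1                      # master with no transaction
    elif master[i][0] > trans[j][0]:
        j += 1                      # unmatched transaction: reject/report
    else:
        matched.append((master[i][0], master[i][1], trans[j][1]))
        i += 1
        j += 1
print(matched)   # [(2, 'BAKER', 50), (4, 'CLARK', 20)]

# The same result as a join -- a few lines of SQL replacing the loop.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE master (key INTEGER, name TEXT)")
db.execute("CREATE TABLE trans  (key INTEGER, amount INTEGER)")
db.executemany("INSERT INTO master VALUES (?, ?)", master)
db.executemany("INSERT INTO trans VALUES (?, ?)", trans)
rows = db.execute("SELECT m.key, m.name, t.amount FROM master m "
                  "JOIN trans t ON m.key = t.key ORDER BY m.key").fetchall()
print(rows)      # identical to the match/merge result
```

Notice what the join silently drops: the unmatched-transaction branch of the loop, which is exactly the kind of business rule that becomes invisible to the next programmer.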
For #2, as I mentioned in an earlier post, the proper way to
synchronize (at Verizon and any other big shop with serious data
volume) is to unload and reload in one quick step. This can also be
part of your backup/recovery scheme. Only a novice would expect all to
go well, month-after-month with parallel updates (via SQL, or any
other transaction scheme). You can create image files that keep the
data visible to all users at all times and create a "catch-bucket" for
updates that will post about 5 minutes later.
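The image-plus-catch-bucket scheme can be sketched in a few lines (a toy sketch; the roughly five-minute delay becomes an explicit flush call here):

```python
# Toy sketch of the "image + catch-bucket" scheme described above:
# readers always see the current image, while updates land in a
# catch-bucket and post later in one batch. The ~5-minute posting
# delay becomes an explicit flush() call in this sketch.

image = {"acct1001": 100, "acct1002": 250}   # always visible to readers
catch_bucket = []                            # deferred updates

def update(acct, delta):
    catch_bucket.append((acct, delta))       # queued, not yet visible

def flush():
    """Post everything in the bucket -- in reality, on a timer."""
    while catch_bucket:
        acct, delta = catch_bucket.pop(0)
        image[acct] += delta

update("acct1001", -30)
print(image["acct1001"])   # 100 -- readers still see the old image
flush()
print(image["acct1001"])   # 70  -- the deferred update has posted
```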
Chris
Request for Answer Clarification by david617-ga on 03 Nov 2002 21:37 PST
Chris,
Thanks for the added info. In the relational world, for better or
worse, business rules are often encapsulated in triggers, stored
procedures, and packages. So if I have some business rules in a DB/2
database, in the mainframe world would I have to create similar lines
of COBOL code? (Of course, this is somewhat moot, in that each system
is planned to store different sets of data).
In regard to your previous response, you seem pretty enamored of some
(not all) mainframe database constructs. Of course each system will
have its relative merits, but aren't you familiar with "The Great
Debate," where E.F. Codd pretty convincingly demonstrated that
procedural constructs generally require 50 times more code than SQL
queries which perform identical functions? And aren't the maintenance
costs of COBOL code, with all its hidden business rules,
correspondingly much higher?
Let's move on to a related subject - a comparison of DBA/developer
responsibilities and capabilities on the mainframe vs. UNIX or NT
platforms. The latest versions of Oracle try to fully support many of
the physical paradigms of the mainframe, such as table spaces. Logical
and physical partitioning have also proven to be key features for
decision support systems. How do the developmental and day-to-day
tasks compare between systems? How about system-based load procedures,
such as SQL*Loader and UNIX C scripting? What are their equivalents in
the mainframe world, or did you fully cover that in your last post?
You know, I am dealing with a mainframe mentality with this current
client. They have never seen anything that they didn't feel would look
better when it was done on big iron. Clearly, if this were true,
Oracle and DB/2 would never have come into relative dominance over the
past 25 years. Very few new mainframes go up each year, you know. I
need some well-reasoned advocacy as to why relational systems are
better and have come to dominate (a previous client went to full
relational for their solution and were quite satisfied with it) so
that I can have good arguments against the big iron types. I am
especially intrigued by some of C. J. Date's writings about fronting a
mainframe with a relational front end. He seems pretty convincing, and
it does seem to be the best practices approach these days.
It has been my experience that many people who have worked in the
mainframe world just seem to miss the clear advantages of the
relational world and seem to have constructs which can often defeat
successful implementation of projects. I need your help in trying to
overcome such obstacles.
Regards,
David
Clarification of Answer by chris2002micrometer-ga on 04 Nov 2002 06:10 PST
Ah, where to begin! First, let me state that I am not nearly as biased
as many folks are about mainframes (good or bad). Since I started my
career on a MF, it is simply my basis of comparison for all that has
come along since. Many of the things that seem "new" are renamed,
repackaged, recycled, reinvented, and resold in a vicious marketplace.
E. F. Codd is right ("The Great Debate"). SQL is much more concise
than procedural code. The problems have always been the result of poor
design, poor coding, and orthogonal management goals. Bad code is just
that, whether it's written in COBOL, Java, or SQL.
"And aren't the maintenance costs of COBOL code, with all its hidden
business rules, correspondingly much higher?" This is simply
unfounded. A well-written program can, and should, have a useful life
of 20 years or more. A developer should not be bothered with system
concerns external to his application. Some skepticism is well-founded.
I did a little project in ASP and used some SQL from DB2. It didn't
work right. It turned out (MS actually documented the fact) that the
"LIKE" construct was not usable. I kluged some 3GL code in the page to
successfully demo my prototype.
I am well aware of the conservative mindsets in many of these places
and I was often considered a "radical" and a "hacker", all because I
wanted simplicity and elegance. My "goal" is to not get beeped in the
middle of the night! Much of this comes from "if it ain't broke ..."
thinking. One way to bust through this is to add some indicators,
counters, cross-totals, etc. to the legacy app. Then, when you find a
discrepancy, no matter how small, make a fuss about it.
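The counters-and-cross-totals idea amounts to a control-total pass over a batch, compared against what the legacy app claims (a sketch; the record layout and trailer convention are invented for illustration):

```python
# Sketch of the counters / cross-totals idea: tally the detail records
# in a batch and compare against the trailer the legacy app wrote.
# The record layout and trailer convention here are invented.

detail = [("D", 100), ("D", 250), ("D", -30)]   # detail records: type, amount
trailer = ("T", 3, 320)                         # claimed record count and total

count = sum(1 for rec_type, _ in detail if rec_type == "D")
total = sum(amount for rec_type, amount in detail if rec_type == "D")

ok = (count, total) == (trailer[1], trailer[2])
print(count, total, ok)   # 3 320 True -- any False here is the discrepancy
```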
"Business rules are often encapsulated in triggers, stored
procedures, and packages." This is, and always has been, true in both
environments. What is important is that coders know about all of these
places where logic resides (like in Y2K).
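As a concrete illustration (sqlite3 here, not DB2, and the overdraft rule is invented), here is a business rule living in a trigger rather than in application code -- exactly the kind of second home for logic that coders need to know about:

```python
import sqlite3

# A business rule hiding in a trigger rather than in application code.
# sqlite3 stands in for DB2; the no-overdraft rule is invented.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (acct INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("""
    CREATE TRIGGER no_overdraft BEFORE UPDATE ON account
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'overdraft not allowed');
    END
""")
db.execute("INSERT INTO account VALUES (1, 100)")

try:
    db.execute("UPDATE account SET balance = -50 WHERE acct = 1")
except sqlite3.DatabaseError as exc:
    print(exc)   # the rule fired, invisibly to the calling code
print(db.execute("SELECT balance FROM account WHERE acct = 1").fetchone()[0])
```

An application programmer who only reads the UPDATE statement never sees this rule, which is the Y2K-style discovery problem in miniature.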
"C. J. Date's writings about fronting a mainframe with a relational
front end"
Can you give me a link to this?
"A comparison of DBA/Developer responsibilities and capabilities on
..." I did discuss this earlier, and they are nearly identical. The
mainframe guy runs a program called IDCAMS and the CMOS guy clicks
through ODBC utilities. The planning, tuning and recovery concerns are
all there in full force in both environments.
Later, Chris
Clarification of Answer by chris2002micrometer-ga on 06 Nov 2002 10:32 PST
David - I did a little more thinking about your situation and want to
be sure you can maximize the information I have provided. If your
client had installed the DB2 and made use of it, you would probably
find the business rules in two areas. Traditionally COBOL (or any 3gl)
would embed the SQL calls to DB2. Some of this SQL would do processing
that would have had to be otherwise done in straight COBOL with VSAM
calls (to MVS). The COBOL resides in a pair of libraries (one for
source, one for binary executables, called "load modules"). Some of
the SQL may utilize stored procedures, triggers, etc. that reside in
DB2. Because they don't run DB2, it's all in the COBOL/VSAM. The
business rules are in multiple COBOL source files. These may, or may
not, be easy to
discover and convert to SQL. This depends on what kind of management
they have and their dealings with IT professionals. Just as there can
be multiple COBOL source files, there can also be multiple queries,
triggers and stored procedures. There may be some that aren't used.
There may be redundancy or inconsistency. Well-designed COBOL does
make use of "encapsulation" in the form of "copybooks" (like includes
in C) and static/dynamic subroutine calls (like function calls in C),
which provides for a lot of code re-use. I hope this is the case here!
If you are "front-ending" their existing system, you will need to find
all the rules, re-implement them, and reload their VSAM files
periodically from your new system. When I had to do this kind of thing
in a messy place, I found out where the JOB libraries were. Then the
JOB PROCEDURES, then the COBOL. JOBs call PROCEDURES (PROCs for short),
which in turn call COBOL programs in several steps. Ask about the
"onlines" like CICS, etc. These are special jobs that run all day
taking transactions. There are various scans, compares, sorts to help
you find what's relevant. You will need to get help from an MVS person
initially, for a day or two. Then you should be able to do some
meaningful research w/o bothering the techie much. The source code is
not inherently difficult to read (but it could have been written by an
idiot!).
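That discovery process (JOBs to PROCs to COBOL programs) is essentially a text scan, and the "various scans" mentioned can be sketched off-mainframe -- here a Python pass that pulls program and PROC references out of JCL-like text (the sample JCL below is invented for illustration):

```python
import re

# Sketch of the discovery scan described above: walk JCL text and list
# which programs and PROCs each job step executes. The sample JCL is
# invented for illustration.

jcl = """\
//DAILYUPD JOB (ACCT),'NIGHTLY UPDATE'
//STEP1    EXEC PGM=SORTMAST
//STEP2    EXEC PGM=POSTTRAN
//STEP3    EXEC BILLPROC
"""

programs = re.findall(r"EXEC\s+PGM=(\w+)", jcl)   # steps running a program
procs = re.findall(r"EXEC\s+(?!PGM=)(\w+)", jcl)  # steps invoking a PROC
print(programs, procs)   # ['SORTMAST', 'POSTTRAN'] ['BILLPROC']
```

Run over the whole JOB library, a scan like this gives you the map of which programs to read, before you ever bother the MVS techie.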
HTH, Chris