Subject: Re: [tex-implementors] TeX+locale, solution?
From: Petr Olsak
To: tex-implementors@tug.org
Date: Sat, 1 Feb 2003 10:13:14 +0100
In-Reply-To: <87vg05jb0a.fsf@infovore.xs4all.nl>

On 31 Jan 2003, Olaf Weber wrote:

> LaTeX-the-macro-package and TeX-the-program would need to be able to
> communicate this kind of information to each other, which implies
> adding new primitives to TeX-the-program.

encTeX actually does this, for example. It provides primitives with
which you can read or set the values of the xord/xchr vectors.
If these primitives were used by the inputenc macro package, that
package would be more intelligent than the current version.
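To make this concrete, here is a minimal sketch of such access in plain TeX, assuming the primitive names \xordcode and \xchrcode as I recall them from the encTeX distribution (check the encTeX manual for the exact names and syntax):

```tex
% Sketch (plain TeX + encTeX; assumed primitives \xordcode, \xchrcode).
% xord maps input bytes to TeX's internal character codes; xchr maps
% internal codes back to bytes on output. Here byte 0xE8 (ccaron in
% ISO-8859-2) is made to pass through unchanged:
\xordcode"E8 = "E8   % input byte 0xE8 -> internal code 0xE8
\xchrcode"E8 = "E8   % internal code 0xE8 -> output byte 0xE8
% The current values can also be read back, like any internal integer:
\count0 = \xordcode"E8
\message{xord of byte 0xE8 is \the\count0}
```

A macro package could inspect these values at run time instead of guessing the input encoding from a fixed option.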

You can create a macro \usedlatinalphabet with the encTeX primitives,
with the following usage:

\usedlatinalphabet{ÁáČč...Žž}

The convention of this macro is that the first token in its parameter
is Aacute, the second is aacute, the third is Ccaron, etc. (I don't know
what you actually see in this parameter in my e-mail.) Now, if the
document is sent to another environment and (possibly) recoded, then the
argument of \usedlatinalphabet is recoded too, and the macro still knows
the input encoding of the document. It does not matter which encoding
it is.
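The positional convention can be sketched like this. Note that \usedlatinalphabet is the hypothetical macro proposed above, not part of any existing package, and the helper name \scanlatinalphabet is invented for illustration:

```tex
% Hypothetical sketch: each argument slot has a fixed meaning
% (slot 1 = Aacute, slot 2 = aacute, slot 3 = Ccaron, slot 4 = ccaron),
% so recoding the bytes of the argument never changes which slot
% stands for which letter.
\def\usedlatinalphabet#1{\scanlatinalphabet#1\relax}
\def\scanlatinalphabet#1#2#3#4{%
  \uccode`#2=`#1 \lccode`#1=`#2  % couple the first case pair
  \uccode`#4=`#3 \lccode`#3=`#4  % couple the second case pair
  % a real implementation would recurse over the remaining pairs
  % up to the terminating \relax
}
```

Whatever byte values arrive after recoding, the macro assigns them their meaning purely by position.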

I have not thought through the \usedcyralphabet macro and its problems,
but \usedlatinalphabet is much needed in our country, where our
alphabet can be encoded in six different one-byte encodings (one on
UNIX-like systems, two more on Windows/DOS-like systems, one on Mac,
and others on old systems).  When a document is transferred, the
encoding of our alphabet is recoded too. The inputenc package
parameter does not hold the right value after re-encoding.

The \usedlatinalphabet macro solves only one-byte to one-byte
re-encoding. On the other hand, it can detect that its parameter
contains more tokens than the normal number, and in that case it can
activate UTF-8 re-encoding via the encTeX primitives.
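The detection step can be sketched as follows. Here \mubytein is, as I understand the encTeX manual, the register that switches on encTeX's multi-byte input processing; the counting macro and the threshold of 30 tokens are purely illustrative:

```tex
% Sketch: if the alphabet argument holds more tokens than one byte
% per letter would give, the bytes must be multi-byte (UTF-8), so
% switch encTeX to its UTF-8 mode.
\newcount\alphabetlen
\def\counttokens#1{\alphabetlen=0 \cnttoksA#1\relax}
\def\cnttoksA#1{\ifx#1\relax \else
  \advance\alphabetlen by1 \expandafter\cnttoksA\fi}
\def\usedlatinalphabet#1{%
  \counttokens{#1}%
  \ifnum\alphabetlen>30 % illustrative threshold: more than N letters
    \mubytein=1 % assumed encTeX switch for UTF-8 input decoding
  \fi}
```

The same document therefore works whether it arrives in a one-byte encoding or re-encoded to UTF-8.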

------

IMHO, the main problem is that TeX is frozen and we are talking only
about its extensions. Finding new standards for TeX extensions is a
very difficult task because no authority such as Donald Knuth exists
for them. Who is responsible for these extensions? Who will start to
use and to promote a new approach that is not fully compatible with
Knuth's TeX?

PS: I agree with Vladimir Volovich: locale-dependency in TeX's
typesetting process is not desirable.

Petr Olsak


_______________________________________________
tex-implementors mailing list
postmaster@tug.org
http://tug.org/mailman/listinfo/tex-implementors
