From: Lars Hellström
Subject: Re: Mapping Functions Versions for All and Some
Date: Wed, 8 Feb 2012 17:34:28 +0100
To: LATEX-L@listserv.uni-heidelberg.de
Reply-To: Mailing list for the LaTeX3 project
Message-ID: <4F32A414.4060706@residenset.net>
In-Reply-To: <20120207183403.GA1694@csmvddesktop>

dongen wrote 2012-02-07 19.34:
> * Bruno Le Floch [2012-02-06 04:26:24 -0500]:
>
> Hi Bruno,
>
> I'm starting to understand the problem with expandable caching.

[snip]

> I think there's a general expandable solution to this. Let's assume we
> want to compute f( n ). We compute it by computing f2( {}, n ). The
> first argument is the cache of f2. The value returned by f2 is a pair
> (c,r), where c is the last cache of f2 and r is the result of f( n ).

The problem with expandable caching is in _how_ one stores the cached
data. The elementary way of storing information in TeX is to make an
assignment, but an assignment is a command, and thus cannot be performed
at expand-time.
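To make that concrete, here is a minimal sketch in plain TeX with e-TeX
extensions (the macro names \tryA, \tryB and the count \n are mine, invented
for this example): an assignment such as \advance is simply copied, not
performed, inside an \edef, whereas a \numexpr computation is carried out
entirely by expansion.

```tex
% Not performed at expand-time: \advance is a command, so inside \edef
% it is merely copied into the replacement text, not executed.
\newcount\n
\n=41
\edef\tryA{\advance\n by 1 }% \tryA now *contains* "\advance\n by 1",
                            % and \n is still 41 -- nothing was stored.

% Performed at expand-time: \the\numexpr...\relax (e-TeX) is evaluated
% during expansion, so the result can be captured by \edef.
\edef\tryB{\the\numexpr \n + 1 \relax}% \tryB is the digits "42"
```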
The subset of TeX programming techniques that are available at
expand-only-time is quite unlike imperative programming, and also (AFAICT)
not much supported by the LaTeX3 kernel at the moment. One illustration of
this is provided by the suggested solution to the "Mapping Functions
Versions for All and Some" problem that is still in the title of this
thread: it involved setting a flag variable. Setting a flag is an
assignment, so that solution would not do for an "All" or "Some" predicate
that had to be evaluated at expand-time.

This does not mean it is impossible to do at expand-time, but one has to
employ a different set of programming techniques when doing it: mostly
techniques from functional programming, and when nothing else helps, resort
to combinatory logic (as the equivalent and more traditional lambda
calculus requires a lambda operator, which again is not available at
expand-time in TeX). Since caches can be implemented in lambda calculus,
one can make an expand-time cache in TeX, but the details are not exactly
easy to get right.

FWIW, making something expandable that could cache arbitrary amounts of
data was a motivating use-case when I set out to write that 2-3-tree
package I've mentioned earlier. It is feasible to use (or will be, once
finished), but I think the programming style will take many people quite
some time to get used to.

> This seems very doable and I think it's possible to do it in a time
> complexity that is at most O( c^2 ), where c is the number of things
> that have to be cached.

You're thinking of recursions of bounded length here? Be aware that even
with a cache of fixed length it may be a nontrivial problem to write a TeX
macro that will access the right elements; one tends to run out of
arguments (#9 is the last there is). Writing correct code that
automatically defines the necessary macros is even trickier.

> It's also possible to optimise the cache, in the sense that lookups for
> frequently looked up items are more efficient.
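As an aside, the flag-free functional style I alluded to above can be
illustrated by a small plain-TeX sketch (all names invented for this
example, and items restricted to digit tokens to keep it short): an
expandable "Some" predicate that tests whether a digit string contains a 0,
carrying its answer in the token stream rather than in a flag variable.

```tex
% Expandable "Some" test: does the digit string #1 contain a 0?
% A sentinel 0 is appended so the delimited scan always finds one; if
% it was the sentinel that was found, the remainder #2 is empty and the
% answer is "no".  (In "#10" TeX reads parameter #1 followed by a
% literal digit 0 acting as delimiter; \detokenize is e-TeX.)
\long\def\ContainsZero#1{\ContainsZeroA#10\ContainsZeroEnd}
\long\def\ContainsZeroA#10#2\ContainsZeroEnd{%
  \if\relax\detokenize{#2}\relax no\else yes\fi}

% No assignment anywhere, so it works in expansion-only contexts:
%   \edef\x{\ContainsZero{3104}}   % \x -> yes
%   \edef\x{\ContainsZero{314}}    % \x -> no
```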
If computations are not restricted to expand-time, then any access is just
O(1), so there is no need to optimise. But if you want to build the cache
at expand-time, then you're in very deep waters.

> I'll try and implement this for a toy example. If it proves successful,

Considering your performance so far, a success at that would surprise me;
more likely you'll end up producing a piece of code that doesn't work as
intended and then asking everyone why it doesn't. Learn to walk before you
run. :-)

Lars Hellström