Mapping DECtalk commandset to SABLE
- To: emacspeak@xxxxxxxxxxx
- Subject: Mapping DECtalk commandset to SABLE
- From: Mario Lang <mlang@xxxxxxxxxxx>
- Date: Wed, 09 Feb 2000 15:08:32 +0100
- Old-Return-Path: <mlang@xxxxxxxxxxx>
- Resent-Date: Wed, 9 Feb 2000 09:10:21 -0500 (EST)
- Resent-From: emacspeak@xxxxxxxxxxx
- Resent-Message-ID: <"eZcFND.A.hqF.0UXo4"@hub>
- Resent-Sender: emacspeak-request@xxxxxxxxxxx
- User-Agent: WEMI/1.13.7 (Shimada) FLIM/1.13.2 (Kasanui) Emacs/20.5 (i386-debian-linux-gnu) MULE/4.0 (HANANOEN)
Hello.
As some of you probably know, I am in the process of
writing an Emacspeak server for Festival.
I am currently multithreading the whole thing so that it can
drop commands which would never get spoken anyway (such as fast scrolling, which produces many q d s q d s q d s series)...
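To give an idea (the exact traffic of course depends on how Emacspeak drives the server), a burst of fast scrolling arrives at the server roughly like this:
q {first line}
d
s
q {second line}
d
s
q {third line}
d
Only the text queued after the last s ever needs to reach Festival; everything queued before it gets cut off by the following s anyway.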
When this is finished (I hope soon, it's quite a brain-teaser for me),
the next step will be voice-lock mode.
I plan to convert the strings Emacspeak sends to the speech server
into SABLE-marked-up text for Festival.
First of all, is this the right way to go? Or does anyone think that
implementing the SABLE commands in a festival-voices.el would be better?
OK, if the answer to question 1 is no, then I am looking for help:
I'd like to talk with someone about how to map the DECtalk command set,
with all its parameters, to appropriate SABLE commands and their parameters.
This can only be done well from the ground up.
I am absolutely not familiar with DECtalk and don't know what those
many parameters (the numbers) mean. I didn't find any good specs anyway...
Something to think about:
SABLE supports: <SPEAKER></SPEAKER>, <RATE></RATE>, <SAYAS></SAYAS>, <AUDIO SRC/>
Those are the main commands to map to.
The speech rate has to be in the range of 1 to 99%.
Speakers are selected by speaker name.
SAYAS can be used for pronunciation and so on.
AUDIO SRC can be used to include waveforms directly in the spoken text.
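Just as a rough sketch of what such a mapping could produce (the speaker name and attribute values below are only placeholders, and I would still have to check the exact SABLE attribute names), a voice and rate change could end up being sent to Festival as something like:
(tts "<SABLE><SPEAKER NAME=\"male1\"><RATE SPEED=\"50%\">some voice-locked text</RATE></SPEAKER></SABLE>" 'sable)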
Just got another question:
Does anyone know how to reconfigure Emacspeak so that it doesn't use
the play program for auditory icons? It should use the a and p commands instead
and send the auditory icons to the speech server.
This would eliminate the "two sound cards" problem.
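On the Festival side this would fit nicely with AUDIO SRC. Roughly (the file name below is only an example path, and again I would have to check the exact SABLE syntax), an a command naming an icon file could be turned into:
(tts "<SABLE><AUDIO SRC=\"/usr/local/share/emacspeak/sounds/close-object.au\"/></SABLE>" 'sable)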
And the last question, mainly for Raman:
Are you planning to implement a pronounce-word command for the speech servers?
When I tell Emacspeak to read the characters of a word, it sends:
l {x}
l {y}
l {z}
and so on.
My speech server would then send the following to Festival:
(tts "<SABLE><SAYAS MODE=\"literal\">x</SAYAS> </SABLE>" 'sable)
(tts "<SABLE><SAYAS MODE=\"literal\">y</SAYAS> </SABLE>" 'sable)
(tts "<SABLE><SAYAS MODE=\"literal\">z</SAYAS> </SABLE>" 'sable)
But if Emacspeak sent a single command for spelling out a whole word, I could convert that to:
(tts "<SABLE><SAYAS MODE=\"literal\">xyz</SAYAS> </SABLE>" 'sable)
which would reduce the overhead a lot.
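For example (the command name is just a suggestion, nothing that exists today), Emacspeak could send a single line such as
spell {xyz}
and the server would turn it into the one SABLE call above instead of three.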
Regards,
Mario Lang <lang@xxxxxxxxxxx-graz.ac.at>
-----------------------------------------------------------------------------
To unsubscribe or change your address send mail to
"emacspeak-request@xxxxxxxxxxx" with a subject of "unsubscribe" or "help"