Voice input

Our team recently started using the voice input option in WeSay, and we configured our project to have a separate “pronunciation” field with vernacular voice input. The send/receive function through USB shares these audio files with other users. However, the other computers don’t show the play button for these pronunciations. They only show the record button. The audio files exist in their computer’s folder structure, but the program doesn’t recognize that there is something to play. How do we resolve this?

I have since learned that even the computer that originally created these audio files with WeSay's voice input no longer associates them with the pronunciation fields in which they were recorded. I can recreate the association by holding the Shift key and selecting the existing audio file in the folder structure; however, even then, WeSay creates a new audio file with "_1" appended to the file name and associates the word with that new file. If I do this for a word and then complete send/receive between two computers, the association does remain. Still, it is a bit disconcerting that a new audio file has to be created for this to work, and it is surprising that the original association was not preserved after the audio was recorded in that pronunciation field.
Thank you,

Perhaps you could fix up the pronunciation file paths with a find/replace in the LIFT XML directly?
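A sketch of that find/replace idea, in Python. This assumes the audio file names appear as plain text in the LIFT XML (e.g. "hello.wav") and that the recordings live in the project's audio folder; the paths, the folder name, and the old-to-new rename map are all placeholders you would adapt to your own project, and the script keeps a backup of the LIFT file before touching it:

```python
import re
import shutil
from pathlib import Path

def fix_audio_refs(lift_path, audio_dir, renames):
    """Rewrite audio file names in the LIFT file per the `renames` map
    (old name -> new name), after backing the file up, and return any
    .wav references that still point at files missing from `audio_dir`."""
    lift = Path(lift_path)
    # Keep a backup copy before editing the LIFT file in place.
    shutil.copy2(lift, lift.with_name(lift.name + ".bak"))
    text = lift.read_text(encoding="utf-8")
    for old, new in renames.items():
        text = text.replace(old, new)
    lift.write_text(text, encoding="utf-8")
    # Sanity check: list .wav references with no matching file on disk.
    missing = [name for name in re.findall(r"[\w\-]+\.wav", text)
               if not (Path(audio_dir) / name).exists()]
    return missing

# Hypothetical usage, e.g. after WeSay renamed hello.wav to hello_1.wav:
# fix_audio_refs("MyProject.lift", "MyProject/audio",
#                {"hello.wav": "hello_1.wav"})
```

The returned list of missing files is worth checking before a send/receive, since a reference to a file that does not exist on disk would reproduce the "record button only" symptom described above. Close WeSay before running anything like this, and test on a copy of the project first.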

Kind regards,