Sound quality comparison
29-08-2014, 07:34 (This post was last modified: 29-08-2014 07:35 by simoncn.)
Post: #21
RE: Sound quality comparison
(29-08-2014 02:08)krutsch Wrote:  Wow... that all sounds like quicksand for you as the developer/supporter for MinimServer.

If you want to put more time into transcoding features, maybe think about exposing FFmpeg switches for MinimStreamer via the web-based configurator. For example, I would love to be able to invoke SoX with minimum-phase or linear-phase filters and forced upsampling to best match the characteristics of my DAC.

Just a thought... Cool

I would like to support this at some stage. There are some tricky issues of which options can be exposed safely and which options can't be exposed because they would interfere with MinimStreamer's handling of the stream. Also, a user with multiple DACs might want different upsampling options for each DAC, which brings us back to the original question of how to provide different transcoding settings for different renderers.
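To make the idea concrete, here is a minimal sketch of what exposing such switches might translate to under the hood, assuming FFmpeg is built with libsoxr. The function name and option set are illustrative, not real MinimServer properties:

```python
# Sketch: translating hypothetical transcoding options into an FFmpeg
# argument list. aresample=resampler=soxr selects the SoX (libsoxr)
# resampler; precision=28 corresponds to SoX "very high quality".

def build_ffmpeg_args(infile, outfile, rate=176400, precision=28):
    """Build an ffmpeg argv that upsamples via the SoX resampler."""
    af = f"aresample=resampler=soxr:precision={precision}:osr={rate}"
    return ["ffmpeg", "-i", infile, "-af", af, outfile]

print(build_ffmpeg_args("in.flac", "out.wav"))
```

As the post notes, the hard part is not building the command line but deciding which of these options can be exposed without interfering with MinimStreamer's own handling of the stream.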
29-08-2014, 13:08 (This post was last modified: 29-08-2014 13:09 by simoncn.)
Post: #22
RE: Sound quality comparison
(25-08-2014 17:55)simoncn Wrote:  The correct way for a UPnP server to support multiple renderers with different capabilities is for the server to offer multiple streams in different formats to all renderers and let each renderer choose which stream format to play. If the renderer is capable of playing more than one of the formats offered by the server, it should choose the first in the list.

For MinimStreamer to support this, the current stream.transcode property syntax would need to be extended to allow a single source type to have multiple target types. This is on my to-do list for a future version of MinimStreamer.

The next version of MinimStreamer will support multiple streams (original and/or transcoded). As previously explained, the control point decides which of the streams to play based on the capabilities advertised by the renderer. When this feature is available, I will be interested to get feedback on how well it works with various renderers and control points.
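The selection rule described above (renderer plays the first offered format it supports) can be sketched in a few lines. The MIME types and URLs here are illustrative only:

```python
# Minimal sketch of the multiple-streams scheme: the server offers several
# representations of the same track in preference order, and the control
# point picks the first one the renderer's advertised capabilities support.

def choose_stream(offered, renderer_sink):
    """Return the first offered (mime, url) pair the renderer claims to play."""
    for mime, url in offered:
        if mime in renderer_sink:
            return mime, url
    return None  # no playable format

# Server advertises two transcodes of the same track, in preference order.
offered = [
    ("audio/x-wav", "http://server/track.wav"),  # 24-bit WAV transcode
    ("audio/mpeg",  "http://server/track.mp3"),  # MP3 transcode
]

wav_renderer = {"audio/x-wav", "audio/flac"}
mp3_only     = {"audio/mpeg"}

print(choose_stream(offered, wav_renderer))  # picks the WAV stream
print(choose_stream(offered, mp3_only))      # falls back to MP3
```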
29-08-2014, 13:20
Post: #23
RE: Sound quality comparison
(29-08-2014 13:08)simoncn Wrote:  
(25-08-2014 17:55)simoncn Wrote:  The correct way for a UPnP server to support multiple renderers with different capabilities is for the server to offer multiple streams in different formats to all renderers and let each renderer choose which stream format to play. If the renderer is capable of playing more than one of the formats offered by the server, it should choose the first in the list.

For MinimStreamer to support this, the current stream.transcode property syntax would need to be extended to allow a single source type to have multiple target types. This is on my to-do list for a future version of MinimStreamer.

The next version of MinimStreamer will support multiple streams (original and/or transcoded). As previously explained, the control point decides which of the streams to play based on the capabilities advertised by the renderer. When this feature is available, I will be interested to get feedback on how well it works with various renderers and control points.

Count me in, I am happy to help. The use case that would fit my purposes:

1. WAV24 output for dedicated streamers (I can't believe there are dedicated streamer appliances that can't play 24-bit WAV - my AVRs and my 1st-gen Denon streamer can't play ALAC or AIFF, but they can do WAV);

2. Android/Mobile use - I really don't want to stream WAV24 to handhelds (e.g. Android phones), as most of them cannot play 24-bit audio without truncation; and who wants to send high bit-depth audio over the Internet?

That last case might be a challenge; on the local network, I can go direct to MinimServer, but over the Internet I am proxying through BubbleUPnP Server, which has a transcoding switch but no way to apply it selectively per renderer.

I will be curious to read about use cases from other users.

Thanks for all you do.
29-08-2014, 14:09 (This post was last modified: 29-08-2014 14:11 by simoncn.)
Post: #24
RE: Sound quality comparison
(29-08-2014 13:20)krutsch Wrote:  Count me in, I am happy to help. The use case that would fit my purposes:

1. WAV24 output for dedicated streamers (I can't believe there are dedicated streamer appliances that can't play 24-bit WAV - my AVRs and my 1st-gen Denon streamer can't play ALAC or AIFF, but they can do WAV);

2. Android/Mobile use - I really don't want to stream WAV24 to handhelds (e.g. Android phones), as most of them cannot play 24-bit audio without truncation; and who wants to send high bit-depth audio over the Internet?

That last case might be a challenge; on the local network, I can go direct to MinimServer, but over the Internet I am proxying through BubbleUPnP Server, which has a transcoding switch but no way to apply it selectively per renderer.

I will be curious to read about use cases from other users.

Thanks for all you do.

Ignoring the internet aspect for the moment, this use case could present a challenge for the UPnP scheme. For example, if you enable wav24 and mp3 streams (listed in that order), your dedicated streamers will play the wav24 stream (as desired) but any Android device that claims support for WAV will be told by the control point to play the wav24 stream.

An ideal solution would be for the control point to provide a user setting for fine-grained control over which of the available streams should be played on which renderer devices.
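The failure mode and the suggested per-renderer fix can both be shown in a small sketch. The device names, MIME types, and the `excluded` setting are illustrative assumptions, not an existing control-point feature:

```python
# Sketch of the problem described above: an Android renderer that advertises
# WAV support gets the wav24 stream under the first-match rule, unless the
# control point applies a user-configured per-renderer exclusion.

def choose_stream(offered, sink, excluded=()):
    """First-match selection, skipping formats the user excluded for this renderer."""
    for mime in offered:
        if mime in sink and mime not in excluded:
            return mime
    return None

offered = ["audio/x-wav", "audio/mpeg"]       # server's preference order
android_sink = {"audio/x-wav", "audio/mpeg"}  # handheld that claims WAV support

# Default first-match rule: the Android device gets the (unwanted) wav24 stream.
print(choose_stream(offered, android_sink))  # audio/x-wav

# With a per-renderer setting excluding WAV, it falls back to MP3.
print(choose_stream(offered, android_sink, excluded={"audio/x-wav"}))  # audio/mpeg
```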
29-08-2014, 14:51
Post: #25
RE: Sound quality comparison
(29-08-2014 14:09)simoncn Wrote:  Ignoring the internet aspect for the moment, this use case could present a challenge for the UPnP scheme. For example, if you enable wav24 and mp3 streams (listed in that order), your dedicated streamers will play the wav24 stream (as desired) but any Android device that claims support for WAV will be told by the control point to play the wav24 stream.

An ideal solution would be for the control point to provide a user setting for fine-grained control over which of the available streams should be played on which renderer devices.

Do any of the control points do that? I am not aware of anything in BubbleUPnP that allows selection of a particular stream.

What about looking at the user-agent from the Minim side of things? Thinking off the top of my head... a user could stream with each of their devices up front, and you capture the observed user-agent strings. Then, assuming they differ between, say, Android devices and appliances (like AVRs), you present a transcoding choice for a given user-agent (and a default for everything else).
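As an off-the-cuff sketch of that idea, the mapping could be a list of user-agent substrings tried in order, with a default for everything else. The strings and targets here are purely illustrative; as the following posts explain, the server cannot actually rely on this at Browse time:

```python
# Hypothetical user-agent -> transcoding-target mapping, tried in order.
UA_RULES = [
    ("Android", "audio/mpeg"),   # handhelds: send MP3
    ("Denon",   "audio/x-wav"),  # appliance: send 24-bit WAV
]
DEFAULT_TARGET = "audio/x-wav"   # default for unrecognised clients

def target_for(user_agent):
    """Return the transcoding target for an observed User-Agent string."""
    for needle, target in UA_RULES:
        if needle in user_agent:
            return target
    return DEFAULT_TARGET

print(target_for("BubbleUPnP/Android 4.4"))  # audio/mpeg
print(target_for("SomeUnknownClient/1.0"))   # audio/x-wav
```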
29-08-2014, 16:29
Post: #26
RE: Sound quality comparison
(29-08-2014 14:51)krutsch Wrote:  Do any of the control points do that? I am not aware of anything in BubbleUPnP that allows selection of a particular stream.

I'm not either, but it would be possible to add this.

Quote:What about looking at the user-agent from the Minim side of things? Thinking off the top of my head... a user could stream with each of their devices up front, and you capture the observed user-agent strings. Then, assuming they differ between, say, Android devices and appliances (like AVRs), you present a transcoding choice for a given user-agent (and a default for everything else).

This doesn't work because the stream details (including audio info such as 24-bit WAV) need to be sent to the control point when the control point issues a Browse request. At this point, the server doesn't know which renderer the control point will use to play the stream.
29-08-2014, 17:00
Post: #27
RE: Sound quality comparison
(29-08-2014 16:29)simoncn Wrote:  
Quote:What about looking at the user-agent from the Minim side of things? Thinking off the top of my head... a user could stream with each of their devices up front, and you capture the observed user-agent strings. Then, assuming they differ between, say, Android devices and appliances (like AVRs), you present a transcoding choice for a given user-agent (and a default for everything else).

This doesn't work because the stream details (including audio info such as 24-bit WAV) need to be sent to the control point when the control point issues a Browse request. At this point, the server doesn't know which renderer the control point will use to play the stream.

You would know better, but at some point the renderer and the server have to talk directly, because it's the renderer that opens the socket connection to the server to fetch the stream. That's when the stream transcoding decision could be made, based on a setting from the user in Minim, not while the control point is negotiating between the renderer and the server.

Unless you can get bubbleguum to make changes to BubbleUPnP to accommodate selecting streams on the fly, that seems like a tough approach.

From a UX standpoint, you don't want users to have to manage this whenever they listen to music - they want to set it from the server and have it work the same way every time for a given server/renderer combination.
29-08-2014, 17:19
Post: #28
RE: Sound quality comparison
(29-08-2014 17:00)krutsch Wrote:  You would know better, but at some point the renderer and the server have to talk directly, because it's the renderer that opens the socket connection to the server to fetch the stream. That's when the stream transcoding decision could be made, based on a setting from the user in Minim, not while the control point is negotiating between the renderer and the server.

It isn't possible for the server to change the stream format at this point. The renderer has been told by the control point what stream type to expect from the server (based on information the server has previously provided to the control point) and the server needs to send what the renderer is expecting.

Quote:Unless you can get bubbleguum to make changes to BubbleUPnP to accommodate selecting streams on the fly, that seems like a tough approach.

From a UX standpoint, you don't want users to have to manage this whenever they listen to music - they want to set it from the server and have it work the same way every time for a given server/renderer combination.

It isn't possible for the server to know which renderer will play the stream. Only the control point knows this. The control point could provide a per-renderer option to specify stream types that the user wants to send (or not send) to that renderer. This would be a one-time setting that doesn't need to be changed when the user is playing music.
29-08-2014, 17:48
Post: #29
RE: Sound quality comparison
(29-08-2014 17:19)simoncn Wrote:  It isn't possible for the server to know which renderer will play the stream. Only the control point knows this. The control point could provide a per-renderer option to specify stream types that the user wants to send (or not send) to that renderer. This would be a one-time setting that doesn't need to be changed when the user is playing music.

How does Twonky solve this problem? I know you can configure Twonky server-side with a set of capabilities based on renderer type (e.g. Pioneer AVR, Sony Blu-ray player), then drive it from a control point, and it just sort of works (as much as Twonky works at all Dodgy)
29-08-2014, 18:07
Post: #30
RE: Sound quality comparison
(29-08-2014 17:48)krutsch Wrote:  How does Twonky solve this problem? I know you can configure Twonky server-side with a set of capabilities based on renderer type (e.g. Pioneer AVR, Sony Blu-ray player), then drive it from a control point, and it just sort of works (as much as Twonky works at all Dodgy)

Do these configurable capabilities include custom transcoding settings?