
How to do the midi generation and midi playing simultaneously? #81

Open
momo1986 opened this issue Nov 22, 2019 · 2 comments

Comments


momo1986 commented Nov 22, 2019

Hello, dear google-magenta guys,

I tried to use your APIs to generate music.
First, I create an estimator:

from tensor2tensor.utils import trainer_lib

# Create Estimator.
run_config = trainer_lib.create_run_config(hparams)
estimator = trainer_lib.create_estimator(
    model_name, hparams, run_config,
    decode_hparams=decode_hparams)

Second, I have a MIDI input generator:

import numpy as np
from tensor2tensor.utils import decoding

# Create input generator (so we can adjust priming and
# decode length on the fly).
def input_generator():
    global targets
    global decode_length
    while True:
        yield {
            'targets': np.array([targets], dtype=np.int32),
            'decode_length': np.array(decode_length, dtype=np.int32)
        }

# These values will be changed by subsequent cells.
targets = []
decode_length = 0

# Start the Estimator, loading from the specified checkpoint.
input_fn = decoding.make_input_fn_from_generator(input_generator())

Third, I run prediction:

unconditional_samples = estimator.predict(
    input_fn, checkpoint_path=ckpt_path)
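
If I understand correctly, estimator.predict returns a lazy generator here: nothing actually runs until next() is called, and each call re-reads the globals from the input generator above. A minimal sketch of what I mean (the decode lengths are placeholder values of my own, not from the colab):

# Each next() pulls one full sample and re-reads the current
# values of the targets / decode_length globals.
targets = []           # empty primer -> unconditioned sample
decode_length = 1024   # placeholder token budget

first_sample = next(unconditional_samples)['outputs']

decode_length = 256    # adjusted on the fly before the next pull
second_sample = next(unconditional_samples)['outputs']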

Fourth, I have a function that generates the MIDI notes one by one; after generation, it reads the MIDI file back for playback:

    # Generate sample events.
    sample_ids = next(unconditional_samples)['outputs']

    # Decode to NoteSequence.
    midi_filename = decode(
        sample_ids,
        encoder=unconditional_encoders['targets'])

    return open(midi_filename, 'rb').read()

To be frank, my question is: is it possible to iterate over the model output and play it as it goes, i.e. do MIDI generation (with decoding) and MIDI playback simultaneously, instead of playing the MIDI file only once it is fully ready? A sketch of the kind of overlap I mean is below.
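
Here is a minimal sketch of that overlap, where playback of one finished sample runs while the next sample is being generated. play_midi_file() is a hypothetical playback helper, not a Magenta API:

import queue
import threading

midi_queue = queue.Queue(maxsize=2)

def producer():
    # Keep generating and decoding samples in the background.
    while True:
        sample_ids = next(unconditional_samples)['outputs']
        midi_filename = decode(
            sample_ids, encoder=unconditional_encoders['targets'])
        midi_queue.put(midi_filename)  # blocks while the queue is full

threading.Thread(target=producer, daemon=True).start()

while True:
    # Play sample N while sample N+1 is being generated.
    play_midi_file(midi_queue.get())

Even this only overlaps whole samples, though; ideally I would like to start playing notes while a single sample is still decoding.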

Thanks & Regards!
Jun Yan

momo1986 (Author) commented:

Currently, the demo generates the MIDI and then plays it by opening the finished MIDI file.

Is there the potential to iterate over the prediction result and play it as one music sequence after another?

Thanks & Regards!
Jun Yan

momo1986 (Author) commented:

I am sorry.
As our small team found, the operation that takes the most time is this:

# Generate sample events.
sample_ids = next(unconditional_samples)['outputs']

However, this output data is essential for both generating the MIDI and playing it.

How can we reduce the time of this operation, given that the "next" call costs so much?
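
One idea we are considering (a sketch that hides the latency rather than reducing next() itself) is to prefetch the following sample in a background thread while the current one plays; play_midi_file() is the same hypothetical helper as above:

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(next, unconditional_samples)  # start generating

while True:
    sample_ids = future.result()['outputs']                # wait for sample N
    future = executor.submit(next, unconditional_samples)  # kick off N+1
    midi_filename = decode(
        sample_ids, encoder=unconditional_encoders['targets'])
    play_midi_file(midi_filename)  # playback overlaps generation of N+1

This overlaps the next() time with playback; if there is a way to make the decode loop itself stream partial outputs, that would be even better.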

Thanks & Regards!
