Sorry if this is covered somewhere else; I haven't been able to find anything that addresses it in the FAQs.
I've been digging through the code looking for the actual forward pass of any of the model combinations, without any luck. Is there a generic config or something I've missed that actually defines the flow of model inputs and outputs, i.e. something that spells out face_image2 = self.decoder(self.bottleneck(self.encoder(face_image1)))?
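To be concrete, here's a toy sketch of the kind of explicit composition I mean. The three functions are trivial stand-ins I made up, not anything from this repo; I just want to find where the equivalent chaining actually happens:

```python
# Stand-in components (placeholders for real layers, not the project's code).
def encoder(x):      # shared encoder
    return [v * 2 for v in x]

def bottleneck(z):   # latent bottleneck
    return [v + 1 for v in z]

def decoder(z):      # per-identity decoder
    return [v / 2 for v in z]

def forward(face_image):
    # The explicit forward pass I'm looking for: decoder(bottleneck(encoder(x)))
    return decoder(bottleneck(encoder(face_image)))

print(forward([1.0, 2.0]))  # [1.5, 2.5]
```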
I was hoping to drop my own custom architectures into this (and, if successful, share them), but I can't seem to find a forward pass in a syntax I'm familiar with. I usually work with lower-level TensorFlow custom loops / gradient_tape-oriented training, and these higher-level Keras abstractions (i.e. model.train_on_batch(..)) have me scratching my head.
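For reference, this is the lower-level style I'm used to, where the forward pass is written out explicitly inside the training step. The layer sizes and components here are placeholders of my own, not the project's real architecture:

```python
import tensorflow as tf

# Placeholder sub-models; in my usual workflow these would be my custom architectures.
encoder = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu")])
decoder = tf.keras.Sequential([tf.keras.layers.Dense(4)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = decoder(encoder(x))  # explicit forward pass, written out in one line
        loss = loss_fn(y, y_pred)
    variables = encoder.trainable_variables + decoder.trainable_variables
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss

x = tf.random.normal((2, 4))
loss = train_step(x, x)  # autoencoder-style step: reconstruct the input
```

With train_on_batch the equivalent forward pass is hidden inside the compiled model, which is why I'm struggling to see where to plug my own architectures in.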