When using Wan 2.1 with LoRAs (often packaged as "video effects"), prompt quality comes from two layers working together: the base prompt defines the shot, and the effect nudges the output toward a specific learned look, motion pattern, or transformation.
Effect trigger + subject + environment + action + camera + lighting + style + constraints
With video effects, the prompt describes the shot while the effect pushes it toward a learned behavior, distortion, transformation, or style.
Prompt the video first, then let the effect bend it.
Trigger + who + where + what happens + camera + look + stability priorities
Use the effect's trigger word verbatim; many effects depend on exact wording to activate.
Define the character, object, outfit, or visual anchor that the effect will act on.
Describe the main motion clearly so the effect has a readable base shot to work with.
Use camera, lighting, mood, and stability cues to keep the effect from overwhelming the scene.
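The formula above can be sketched as a small helper that joins the pieces in order. This is an illustrative sketch only: the function name, the comma-joined format, and the trigger word "melt_fx" are assumptions for the example, not part of Wan 2.1 or any official tooling.

```python
def build_effect_prompt(trigger, subject, environment, action,
                        camera, look, constraints=""):
    """Assemble a prompt as: trigger + who + where + what happens
    + camera + look + stability priorities (hypothetical format)."""
    parts = [trigger, subject, environment, action, camera, look, constraints]
    # Drop empty pieces and join into a single comma-separated prompt.
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_effect_prompt(
    trigger="melt_fx",  # hypothetical trigger word, placed first
    subject="a ceramic robot figurine",
    environment="on a sunlit kitchen table",
    action="slowly slumping as it softens",
    camera="static close-up shot",
    look="warm natural light, shallow depth of field",
    constraints="background stays stable",
)
```

The ordering matters more than the exact separator: putting the trigger first and the stability constraints last mirrors the structure the guide recommends.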
For text-to-video, define the whole shot clearly so the effect has a strong structure to work on.
For image-to-video, let the source image anchor the look and use the prompt mainly for motion, camera, atmosphere, and effect behavior.
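The two modes call for noticeably different prompts. The pair below contrasts them; "glitch_fx" is an assumed trigger word for illustration, not a real Wan 2.1 effect name.

```python
# Text-to-video: the prompt must define the whole shot, since there is
# no source image to anchor subject, outfit, or environment.
t2v_prompt = (
    "glitch_fx, a woman in a red raincoat walking through a neon-lit alley "
    "at night, slow tracking shot from behind, rain and reflections, "
    "moody cyberpunk lighting, face and outfit remain consistent"
)

# Image-to-video: the source image anchors the look, so the prompt spends
# its words on motion, camera, atmosphere, and how the effect should behave.
i2v_prompt = (
    "glitch_fx, she turns toward the camera as digital distortion ripples "
    "across the frame, slow push-in, flickering neon light, "
    "background stays stable"
)
```

Note how the image-to-video prompt never re-describes the subject's appearance; repeating it can fight the source image instead of reinforcing it.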
Clear camera language helps separate the shot from the effect itself.
Write the shot clearly first. Then use the effect to push the result, not to replace the need for a good prompt.